Test Report: Docker_Windows 22112

236742b414df344dfb04283ee96fef673bd34cb2:2025-12-12:42745

Failed tests (34/427)

Order  Failed test  Duration (s)
67 TestErrorSpam/setup 51.02
169 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 517.91
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 373.67
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 53.55
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 53.76
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 53.7
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 740.02
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 53.88
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 20.2
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 5.34
199 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 124.12
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 242.85
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 23.78
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 52.62
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.1
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.47
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.49
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.52
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.5
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.53
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.48
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 20.18
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/powershell 2.84
360 TestKubernetesUpgrade 844.6
458 TestStartStop/group/no-preload/serial/FirstStart 531.89
483 TestStartStop/group/newest-cni/serial/FirstStart 518.94
497 TestStartStop/group/no-preload/serial/DeployApp 5.68
498 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 119.61
501 TestStartStop/group/no-preload/serial/SecondStart 377.91
503 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 91.77
506 TestStartStop/group/newest-cni/serial/SecondStart 380.95
507 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 545.48
511 TestStartStop/group/newest-cni/serial/Pause 13.62
512 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 223.45
TestErrorSpam/setup (51.02s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-169700 -n=1 --memory=3072 --wait=false --log_dir=C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-169700 --driver=docker
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-169700 -n=1 --memory=3072 --wait=false --log_dir=C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-169700 --driver=docker: (51.0150145s)
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube container"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-169700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
- KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
- MINIKUBE_LOCATION=22112
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting "nospam-169700" primary control-plane node in "nospam-169700" cluster
* Pulling base image v0.0.48-1765505794-22112 ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-169700" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Failing to connect to https://registry.k8s.io/ from inside the minikube container
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (51.02s)
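The unexpected stderr above is minikube's registry.k8s.io connectivity warning behind a proxy. A minimal sketch of the proxy environment its warning points at, assuming a hypothetical proxy endpoint and the minikube container IP that appears later in this report (192.168.49.2):

```shell
# Hypothetical proxy endpoint -- replace with your environment's values.
export HTTPS_PROXY=http://proxy.example.com:3128
# NO_PROXY must include the minikube container IP, or calls to the cluster
# get routed through the proxy (the warning seen in the StartWithProxy log).
export NO_PROXY=localhost,127.0.0.1,192.168.49.2
case ",$NO_PROXY," in
  *,192.168.49.2,*) echo "minikube IP covered by NO_PROXY" ;;
  *)                echo "minikube IP missing from NO_PROXY" ;;
esac
```

This only sketches the environment setup from minikube's proxy docs; the exact host and port are assumptions, not values from this run.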

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (517.91s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-468800 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0
E1212 19:49:54.762257   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:52:31.847167   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:52:31.853830   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:52:31.866291   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:52:31.887680   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:52:31.930507   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:52:32.012189   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:52:32.173811   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:52:32.495799   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:52:33.137930   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:52:34.419629   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:52:36.982250   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:52:42.104287   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:52:52.346340   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:53:12.829550   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:53:53.792889   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:54:54.765083   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:55:15.715904   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:56:17.837158   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-468800 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m35.3121729s)

-- stdout --
	* [functional-468800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "functional-468800" primary control-plane node in "functional-468800" cluster
	* Pulling base image v0.0.48-1765505794-22112 ...
	* Found network options:
	  - HTTP_PROXY=localhost:55770
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	  - HTTP_PROXY=localhost:55770
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:55770 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:55770 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:55770 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:55770 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-468800 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-468800 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000372478s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000203468s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000203468s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-windows-amd64.exe start -p functional-468800 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0": exit status 109
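The kubeadm `wait-control-plane` phase above is polling the kubelet's local health endpoint until it answers. A sketch of the same probe, assuming the standard kubelet healthz port 10248 (on this node the port refuses connections, which is exactly the error in the log):

```shell
# Probe the kubelet healthz endpoint the way kubeadm's wait loop does.
# On a healthy node this prints "ok"; when the kubelet is down, curl fails
# with "connection refused" and the fallback message is printed instead.
curl -sS --max-time 2 http://127.0.0.1:10248/healthz 2>/dev/null \
  || echo "kubelet not responding"
```

If the probe keeps failing, the log's own suggestion is to inspect `journalctl -xeu kubelet` and retry `minikube start` with `--extra-config=kubelet.cgroup-driver=systemd`.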
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-468800
helpers_test.go:244: (dbg) docker inspect functional-468800:

-- stdout --
	[
	    {
	        "Id": "0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356",
	        "Created": "2025-12-12T19:49:05.216637357Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42493,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T19:49:05.485304524Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/hosts",
	        "LogPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356-json.log",
	        "Name": "/functional-468800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-468800:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-468800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06-init/diff:/var/lib/docker/overlay2/98a48e148b465d6f3cfef773b7defe35ccd4e6e0eb3c49d8b329f27b5b52fa09/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-468800",
	                "Source": "/var/lib/docker/volumes/functional-468800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-468800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-468800",
	                "name.minikube.sigs.k8s.io": "functional-468800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1c576a1bc459e3ecef05356a56ff79fb60b3540cf362d7af9e4f9ccd4a4dded4",
	            "SandboxKey": "/var/run/docker/netns/1c576a1bc459",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55779"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55780"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55781"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55777"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55778"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-468800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a0db2e63885ea6d1e4cf32e8285a0f1e1fe10bae67ef9ab957c6270bdb8c136b",
	                    "EndpointID": "d5e453160824066e5fee477ceb2ca5411fd18d149268eed593084ba8a0cf13dd",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-468800",
	                        "0d3494c9d93e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-468800 -n functional-468800
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-468800 -n functional-468800: exit status 6 (585.0924ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1212 19:57:22.913525     756 status.go:458] kubeconfig endpoint: get endpoint: "functional-468800" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-468800 logs -n 25: (1.0737621s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                          ARGS                                                           │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-461000 image ls                                                                                              │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:42 UTC │ 12 Dec 25 19:42 UTC │
	│ image          │ functional-461000 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr     │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:42 UTC │ 12 Dec 25 19:42 UTC │
	│ image          │ functional-461000 image ls                                                                                              │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:42 UTC │ 12 Dec 25 19:42 UTC │
	│ image          │ functional-461000 image save --daemon kicbase/echo-server:functional-461000 --alsologtostderr                           │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:42 UTC │ 12 Dec 25 19:42 UTC │
	│ start          │ -p functional-461000 --dry-run --memory 250MB --alsologtostderr --driver=docker                                         │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │                     │
	│ start          │ -p functional-461000 --dry-run --memory 250MB --alsologtostderr --driver=docker                                         │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │                     │
	│ start          │ -p functional-461000 --dry-run --alsologtostderr -v=1 --driver=docker                                                   │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │                     │
	│ service        │ functional-461000 service list                                                                                          │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ dashboard      │ --url --port 36195 -p functional-461000 --alsologtostderr -v=1                                                          │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │                     │
	│ service        │ functional-461000 service list -o json                                                                                  │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ update-context │ functional-461000 update-context --alsologtostderr -v=2                                                                 │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ update-context │ functional-461000 update-context --alsologtostderr -v=2                                                                 │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ service        │ functional-461000 service --namespace=default --https --url hello-node                                                  │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │                     │
	│ update-context │ functional-461000 update-context --alsologtostderr -v=2                                                                 │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ image          │ functional-461000 image ls --format yaml --alsologtostderr                                                              │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ ssh            │ functional-461000 ssh pgrep buildkitd                                                                                   │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │                     │
	│ image          │ functional-461000 image build -t localhost/my-image:functional-461000 testdata\build --alsologtostderr                  │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ image          │ functional-461000 image ls                                                                                              │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ image          │ functional-461000 image ls --format json --alsologtostderr                                                              │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ image          │ functional-461000 image ls --format table --alsologtostderr                                                             │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ image          │ functional-461000 image ls --format short --alsologtostderr                                                             │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ service        │ functional-461000 service hello-node --url --format={{.IP}}                                                             │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │                     │
	│ service        │ functional-461000 service hello-node --url                                                                              │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:44 UTC │                     │
	│ delete         │ -p functional-461000                                                                                                    │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:48 UTC │ 12 Dec 25 19:48 UTC │
	│ start          │ -p functional-468800 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:48 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 19:48:47
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 19:48:47.041816    9844 out.go:360] Setting OutFile to fd 1556 ...
	I1212 19:48:47.083879    9844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:48:47.083879    9844 out.go:374] Setting ErrFile to fd 1940...
	I1212 19:48:47.083879    9844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:48:47.099835    9844 out.go:368] Setting JSON to false
	I1212 19:48:47.102706    9844 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3065,"bootTime":1765565861,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 19:48:47.102706    9844 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 19:48:47.108809    9844 out.go:179] * [functional-468800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 19:48:47.112372    9844 notify.go:221] Checking for updates...
	I1212 19:48:47.112372    9844 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 19:48:47.115537    9844 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 19:48:47.117847    9844 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 19:48:47.119670    9844 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 19:48:47.121711    9844 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:48:47.124721    9844 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:48:47.266194    9844 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 19:48:47.272202    9844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:48:47.499657    9844 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:80 SystemTime:2025-12-12 19:48:47.479768288 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 19:48:47.504245    9844 out.go:179] * Using the docker driver based on user configuration
	I1212 19:48:47.507592    9844 start.go:309] selected driver: docker
	I1212 19:48:47.507592    9844 start.go:927] validating driver "docker" against <nil>
	I1212 19:48:47.507633    9844 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 19:48:47.590979    9844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:48:47.822120    9844 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:80 SystemTime:2025-12-12 19:48:47.804943751 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 19:48:47.822678    9844 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 19:48:47.823694    9844 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 19:48:47.826376    9844 out.go:179] * Using Docker Desktop driver with root privileges
	I1212 19:48:47.828245    9844 cni.go:84] Creating CNI manager for ""
	I1212 19:48:47.828245    9844 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 19:48:47.828245    9844 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	W1212 19:48:47.828245    9844 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:55770 to docker env.
	W1212 19:48:47.828245    9844 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:55770 to docker env.
	I1212 19:48:47.828245    9844 start.go:353] cluster config:
	{Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAut
hSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:48:47.831746    9844 out.go:179] * Starting "functional-468800" primary control-plane node in "functional-468800" cluster
	I1212 19:48:47.835012    9844 cache.go:134] Beginning downloading kic base image for docker with docker
	I1212 19:48:47.838629    9844 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 19:48:47.841236    9844 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 19:48:47.841236    9844 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 19:48:47.841236    9844 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1212 19:48:47.841236    9844 cache.go:65] Caching tarball of preloaded images
	I1212 19:48:47.841236    9844 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 19:48:47.841236    9844 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1212 19:48:47.841236    9844 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\config.json ...
	I1212 19:48:47.842238    9844 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\config.json: {Name:mk22cac5aebf2be97d29e15272a6b3ba415c1a41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:48:47.922828    9844 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 19:48:47.922828    9844 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 19:48:47.922828    9844 cache.go:243] Successfully downloaded all kic artifacts
	I1212 19:48:47.922828    9844 start.go:360] acquireMachinesLock for functional-468800: {Name:mk7e7177bdfcb5a7969561474f8bb14fa15c1eab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 19:48:47.923468    9844 start.go:364] duration metric: took 607.6µs to acquireMachinesLock for "functional-468800"
	I1212 19:48:47.923468    9844 start.go:93] Provisioning new machine with config: &{Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:fals
e CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 19:48:47.923468    9844 start.go:125] createHost starting for "" (driver="docker")
	I1212 19:48:47.927211    9844 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1212 19:48:47.927783    9844 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:55770 to docker env.
	W1212 19:48:47.927971    9844 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:55770 to docker env.
	I1212 19:48:47.928016    9844 start.go:159] libmachine.API.Create for "functional-468800" (driver="docker")
	I1212 19:48:47.928058    9844 client.go:173] LocalClient.Create starting
	I1212 19:48:47.928754    9844 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1212 19:48:47.928754    9844 main.go:143] libmachine: Decoding PEM data...
	I1212 19:48:47.928754    9844 main.go:143] libmachine: Parsing certificate...
	I1212 19:48:47.928754    9844 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1212 19:48:47.929275    9844 main.go:143] libmachine: Decoding PEM data...
	I1212 19:48:47.929309    9844 main.go:143] libmachine: Parsing certificate...
	I1212 19:48:47.934053    9844 cli_runner.go:164] Run: docker network inspect functional-468800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 19:48:47.987462    9844 cli_runner.go:211] docker network inspect functional-468800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 19:48:47.992700    9844 network_create.go:284] running [docker network inspect functional-468800] to gather additional debugging logs...
	I1212 19:48:47.992700    9844 cli_runner.go:164] Run: docker network inspect functional-468800
	W1212 19:48:48.045868    9844 cli_runner.go:211] docker network inspect functional-468800 returned with exit code 1
	I1212 19:48:48.045868    9844 network_create.go:287] error running [docker network inspect functional-468800]: docker network inspect functional-468800: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-468800 not found
	I1212 19:48:48.045868    9844 network_create.go:289] output of [docker network inspect functional-468800]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-468800 not found
	
	** /stderr **
	I1212 19:48:48.049381    9844 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 19:48:48.115896    9844 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001850930}
	I1212 19:48:48.115896    9844 network_create.go:124] attempt to create docker network functional-468800 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1212 19:48:48.119475    9844 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-468800 functional-468800
	I1212 19:48:48.257334    9844 network_create.go:108] docker network functional-468800 192.168.49.0/24 created
	I1212 19:48:48.257431    9844 kic.go:121] calculated static IP "192.168.49.2" for the "functional-468800" container
	I1212 19:48:48.266102    9844 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 19:48:48.336982    9844 cli_runner.go:164] Run: docker volume create functional-468800 --label name.minikube.sigs.k8s.io=functional-468800 --label created_by.minikube.sigs.k8s.io=true
	I1212 19:48:48.393984    9844 oci.go:103] Successfully created a docker volume functional-468800
	I1212 19:48:48.397985    9844 cli_runner.go:164] Run: docker run --rm --name functional-468800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-468800 --entrypoint /usr/bin/test -v functional-468800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib
	I1212 19:48:49.764101    9844 cli_runner.go:217] Completed: docker run --rm --name functional-468800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-468800 --entrypoint /usr/bin/test -v functional-468800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib: (1.3661033s)
	I1212 19:48:49.764101    9844 oci.go:107] Successfully prepared a docker volume functional-468800
	I1212 19:48:49.764101    9844 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 19:48:49.764101    9844 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 19:48:49.768598    9844 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-468800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 19:49:04.681485    9844 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-468800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir: (14.91266s)
	I1212 19:49:04.681583    9844 kic.go:203] duration metric: took 14.9173464s to extract preloaded images to volume ...
	I1212 19:49:04.686546    9844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:49:04.914908    9844 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:80 SystemTime:2025-12-12 19:49:04.893562637 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 19:49:04.918177    9844 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 19:49:05.164327    9844 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-468800 --name functional-468800 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-468800 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-468800 --network functional-468800 --ip 192.168.49.2 --volume functional-468800:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138
	I1212 19:49:05.806560    9844 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Running}}
	I1212 19:49:05.867648    9844 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
	I1212 19:49:05.928534    9844 cli_runner.go:164] Run: docker exec functional-468800 stat /var/lib/dpkg/alternatives/iptables
	I1212 19:49:06.044967    9844 oci.go:144] the created container "functional-468800" has a running status.
	I1212 19:49:06.044967    9844 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa...
	I1212 19:49:06.075752    9844 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 19:49:06.163038    9844 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
	I1212 19:49:06.228335    9844 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 19:49:06.228335    9844 kic_runner.go:114] Args: [docker exec --privileged functional-468800 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 19:49:06.347233    9844 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa...
	I1212 19:49:08.459866    9844 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
	I1212 19:49:08.515939    9844 machine.go:94] provisionDockerMachine start ...
	I1212 19:49:08.518936    9844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:49:08.574140    9844 main.go:143] libmachine: Using SSH client type: native
	I1212 19:49:08.588391    9844 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:49:08.588391    9844 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 19:49:08.760799    9844 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-468800
	
	I1212 19:49:08.760799    9844 ubuntu.go:182] provisioning hostname "functional-468800"
	I1212 19:49:08.765162    9844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:49:08.824791    9844 main.go:143] libmachine: Using SSH client type: native
	I1212 19:49:08.825255    9844 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:49:08.825284    9844 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-468800 && echo "functional-468800" | sudo tee /etc/hostname
	I1212 19:49:09.014085    9844 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-468800
	
	I1212 19:49:09.017570    9844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:49:09.073974    9844 main.go:143] libmachine: Using SSH client type: native
	I1212 19:49:09.073974    9844 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:49:09.074498    9844 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-468800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-468800/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-468800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 19:49:09.241537    9844 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 19:49:09.241537    9844 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1212 19:49:09.241537    9844 ubuntu.go:190] setting up certificates
	I1212 19:49:09.241537    9844 provision.go:84] configureAuth start
	I1212 19:49:09.245682    9844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-468800
	I1212 19:49:09.298084    9844 provision.go:143] copyHostCerts
	I1212 19:49:09.298084    9844 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1212 19:49:09.298084    9844 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1212 19:49:09.298637    9844 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 19:49:09.299159    9844 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1212 19:49:09.299159    9844 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1212 19:49:09.299159    9844 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 19:49:09.300338    9844 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1212 19:49:09.300338    9844 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1212 19:49:09.300620    9844 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1212 19:49:09.300836    9844 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-468800 san=[127.0.0.1 192.168.49.2 functional-468800 localhost minikube]
	I1212 19:49:09.446731    9844 provision.go:177] copyRemoteCerts
	I1212 19:49:09.450724    9844 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 19:49:09.453724    9844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:49:09.504722    9844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:49:09.635581    9844 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 19:49:09.663658    9844 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 19:49:09.694215    9844 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 19:49:09.723103    9844 provision.go:87] duration metric: took 481.5617ms to configureAuth
	I1212 19:49:09.723103    9844 ubuntu.go:206] setting minikube options for container-runtime
	I1212 19:49:09.723718    9844 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 19:49:09.727272    9844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:49:09.780406    9844 main.go:143] libmachine: Using SSH client type: native
	I1212 19:49:09.781535    9844 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:49:09.781535    9844 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 19:49:09.962831    9844 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1212 19:49:09.962831    9844 ubuntu.go:71] root file system type: overlay
	I1212 19:49:09.962831    9844 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 19:49:09.967671    9844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:49:10.026994    9844 main.go:143] libmachine: Using SSH client type: native
	I1212 19:49:10.027262    9844 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:49:10.027262    9844 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 19:49:10.224867    9844 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
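	The unit file echoed above relies on a standard systemd idiom: an empty `ExecStart=` line first resets the list inherited from the base unit, and only then is the new command set, avoiding the "more than one ExecStart= setting" error the file's own comments describe. A minimal standalone sketch of that pattern (the drop-in directory argument and the dockerd flags here are illustrative, not minikube's actual template):

```shell
#!/bin/sh
# Sketch: override a service's ExecStart via a systemd drop-in, clearing
# the inherited value first so systemd never sees two ExecStart= lines.
set -eu

# Normally /etc/systemd/system/docker.service.d; a local path for the demo.
DROPIN_DIR="${1:-./docker.service.d}"
mkdir -p "$DROPIN_DIR"

cat > "$DROPIN_DIR/override.conf" <<'EOF'
[Service]
# An empty ExecStart= resets the command list inherited from the base unit.
# Without it, non-oneshot services refuse to start.
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
EOF

echo "wrote $DROPIN_DIR/override.conf"
```

After writing such a drop-in, a `systemctl daemon-reload` is required before the override takes effect, which is exactly what the log does a few lines further down.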
	I1212 19:49:10.228964    9844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:49:10.280826    9844 main.go:143] libmachine: Using SSH client type: native
	I1212 19:49:10.281406    9844 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:49:10.281406    9844 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 19:49:11.734104    9844 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-12 19:49:10.215662862 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
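	The `sudo diff -u old new || { mv ...; systemctl restart ...; }` command above is an idempotent-update idiom: the daemon is only restarted when the candidate file actually differs from (or replaces a missing) target, and the non-empty diff output in the log shows that this run took the replace-and-restart branch. A self-contained sketch of the pattern, using hypothetical scratch filenames:

```shell
#!/bin/sh
# Sketch: install a candidate config only when its content changed,
# mirroring the "diff || { mv && restart }" idiom from the log.
# apply_if_changed TARGET CANDIDATE CMD... runs CMD only on a real change.
apply_if_changed() {
    target=$1; candidate=$2; shift 2
    if diff -u "$target" "$candidate" >/dev/null 2>&1; then
        rm -f "$candidate"          # identical: discard candidate, no restart
        echo "unchanged"
    else
        mv "$candidate" "$target"   # differs (or target missing): install it
        "$@"                        # e.g. systemctl restart docker
        echo "updated"
    fi
}

# Demo on scratch files; `true` stands in for the restart command.
printf 'old\n' > demo.conf
printf 'new\n' > demo.conf.new
apply_if_changed demo.conf demo.conf.new true   # prints "updated"
```

Re-running with an identical candidate prints "unchanged" and never invokes the restart command, which is what makes repeated provisioning safe.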
	I1212 19:49:11.734174    9844 machine.go:97] duration metric: took 3.2182059s to provisionDockerMachine
	I1212 19:49:11.734202    9844 client.go:176] duration metric: took 23.8058743s to LocalClient.Create
	I1212 19:49:11.734221    9844 start.go:167] duration metric: took 23.8060341s to libmachine.API.Create "functional-468800"
	I1212 19:49:11.734221    9844 start.go:293] postStartSetup for "functional-468800" (driver="docker")
	I1212 19:49:11.734221    9844 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 19:49:11.738466    9844 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 19:49:11.742003    9844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:49:11.794278    9844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:49:11.942035    9844 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 19:49:11.949361    9844 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 19:49:11.949361    9844 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 19:49:11.949361    9844 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1212 19:49:11.950022    9844 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1212 19:49:11.950267    9844 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> 133962.pem in /etc/ssl/certs
	I1212 19:49:11.951059    9844 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\13396\hosts -> hosts in /etc/test/nested/copy/13396
	I1212 19:49:11.954160    9844 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/13396
	I1212 19:49:11.968885    9844 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /etc/ssl/certs/133962.pem (1708 bytes)
	I1212 19:49:11.999612    9844 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\13396\hosts --> /etc/test/nested/copy/13396/hosts (40 bytes)
	I1212 19:49:12.027905    9844 start.go:296] duration metric: took 293.6804ms for postStartSetup
	I1212 19:49:12.034251    9844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-468800
	I1212 19:49:12.088795    9844 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\config.json ...
	I1212 19:49:12.095832    9844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 19:49:12.098421    9844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:49:12.153377    9844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:49:12.284038    9844 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 19:49:12.295429    9844 start.go:128] duration metric: took 24.3717398s to createHost
	I1212 19:49:12.295429    9844 start.go:83] releasing machines lock for "functional-468800", held for 24.3717398s
	I1212 19:49:12.299050    9844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-468800
	I1212 19:49:12.356683    9844 out.go:179] * Found network options:
	I1212 19:49:12.358611    9844 out.go:179]   - HTTP_PROXY=localhost:55770
	W1212 19:49:12.360884    9844 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1212 19:49:12.364686    9844 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1212 19:49:12.367234    9844 out.go:179]   - HTTP_PROXY=localhost:55770
	I1212 19:49:12.370398    9844 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1212 19:49:12.374858    9844 ssh_runner.go:195] Run: cat /version.json
	I1212 19:49:12.374858    9844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:49:12.377398    9844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:49:12.429506    9844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:49:12.430179    9844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	W1212 19:49:12.559059    9844 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1212 19:49:12.564028    9844 ssh_runner.go:195] Run: systemctl --version
	I1212 19:49:12.578617    9844 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 19:49:12.586241    9844 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 19:49:12.591120    9844 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 19:49:12.640274    9844 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 19:49:12.640274    9844 start.go:496] detecting cgroup driver to use...
	I1212 19:49:12.640378    9844 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 19:49:12.640566    9844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1212 19:49:12.662576    9844 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1212 19:49:12.662576    9844 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1212 19:49:12.668604    9844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 19:49:12.691059    9844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 19:49:12.704735    9844 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 19:49:12.708929    9844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 19:49:12.726790    9844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 19:49:12.744305    9844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 19:49:12.763237    9844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 19:49:12.785060    9844 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 19:49:12.806998    9844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 19:49:12.826154    9844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 19:49:12.846247    9844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 19:49:12.865428    9844 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 19:49:12.884011    9844 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 19:49:12.901857    9844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:49:13.042651    9844 ssh_runner.go:195] Run: sudo systemctl restart containerd
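	The run of `sed -i` commands above rewrites containerd's config in place to match the "cgroupfs" driver detected on the host, then restarts containerd. A reduced sketch of the central edit, run against a scratch copy rather than the real `/etc/containerd/config.toml` (the TOML fragment here is a minimal stand-in, not the full containerd config):

```shell
#!/bin/sh
# Sketch: flip containerd's SystemdCgroup flag off to force the cgroupfs
# driver, as the provisioner does before restarting containerd.
set -eu
cfg="${1:-config.toml}"

# Minimal stand-in for the relevant section of /etc/containerd/config.toml.
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Same GNU sed expression as in the log, preserving indentation via \1.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"
```

Note `-i` and `-r` are GNU sed options; the log's guest OS is Debian 12, where both are available.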
	I1212 19:49:13.193360    9844 start.go:496] detecting cgroup driver to use...
	I1212 19:49:13.193360    9844 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 19:49:13.198239    9844 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 19:49:13.222386    9844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 19:49:13.245721    9844 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 19:49:13.312855    9844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 19:49:13.335588    9844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 19:49:13.353924    9844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 19:49:13.381292    9844 ssh_runner.go:195] Run: which cri-dockerd
	I1212 19:49:13.392720    9844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 19:49:13.407588    9844 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1212 19:49:13.432490    9844 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 19:49:13.569835    9844 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 19:49:13.715794    9844 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 19:49:13.715794    9844 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 19:49:13.741410    9844 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1212 19:49:13.762987    9844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:49:13.900120    9844 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 19:49:14.740601    9844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 19:49:14.768381    9844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1212 19:49:14.794939    9844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 19:49:14.817837    9844 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 19:49:14.959930    9844 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 19:49:15.111120    9844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:49:15.251972    9844 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 19:49:15.279264    9844 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1212 19:49:15.300658    9844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:49:15.442728    9844 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1212 19:49:15.539687    9844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 19:49:15.556970    9844 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 19:49:15.561091    9844 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 19:49:15.569570    9844 start.go:564] Will wait 60s for crictl version
	I1212 19:49:15.574092    9844 ssh_runner.go:195] Run: which crictl
	I1212 19:49:15.585713    9844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 19:49:15.623812    9844 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1212 19:49:15.626993    9844 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 19:49:15.667068    9844 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 19:49:15.707097    9844 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1212 19:49:15.710451    9844 cli_runner.go:164] Run: docker exec -t functional-468800 dig +short host.docker.internal
	I1212 19:49:15.837835    9844 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1212 19:49:15.841761    9844 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1212 19:49:15.855539    9844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 19:49:15.876038    9844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-468800
	I1212 19:49:15.935736    9844 kubeadm.go:884] updating cluster {Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 19:49:15.935921    9844 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 19:49:15.939693    9844 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 19:49:15.978000    9844 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 19:49:15.978525    9844 docker.go:621] Images already preloaded, skipping extraction
	I1212 19:49:15.982168    9844 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 19:49:16.014170    9844 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 19:49:16.014170    9844 cache_images.go:86] Images are preloaded, skipping loading
	I1212 19:49:16.014170    9844 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1212 19:49:16.014371    9844 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-468800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 19:49:16.019488    9844 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 19:49:16.089211    9844 cni.go:84] Creating CNI manager for ""
	I1212 19:49:16.089211    9844 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 19:49:16.089211    9844 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 19:49:16.089211    9844 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-468800 NodeName:functional-468800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 19:49:16.089732    9844 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-468800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 19:49:16.093816    9844 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 19:49:16.106758    9844 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 19:49:16.111281    9844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 19:49:16.124249    9844 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1212 19:49:16.144809    9844 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 19:49:16.163339    9844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1212 19:49:16.189710    9844 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 19:49:16.196708    9844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
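	The hosts-file command above (and the matching one for `host.minikube.internal` earlier) is another idempotent pattern: filter out any existing line for the name, append the current mapping, then copy the result back, so repeated runs never duplicate the entry. A standalone sketch on a scratch file (`hosts.txt` is a hypothetical stand-in for `/etc/hosts`, which the real command installs with `sudo cp`):

```shell
#!/bin/sh
# Sketch: idempotent hosts-file entry update, as in the log.
set -eu
hosts="${1:-hosts.txt}"
ip="192.168.49.2"
name="control-plane.minikube.internal"
tab="$(printf '\t')"

touch "$hosts"
# grep -v drops any stale line for the name; it exits 1 on no match,
# hence the `|| true`. The fresh mapping is then appended.
{ grep -v "${tab}${name}\$" "$hosts" || true
  printf '%s\t%s\n' "$ip" "$name"; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Running the script twice leaves exactly one line for the name, which is why minikube can re-run provisioning without polluting `/etc/hosts`.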
	I1212 19:49:16.215041    9844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:49:16.350632    9844 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 19:49:16.371293    9844 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800 for IP: 192.168.49.2
	I1212 19:49:16.371413    9844 certs.go:195] generating shared ca certs ...
	I1212 19:49:16.371413    9844 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:49:16.371928    9844 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1212 19:49:16.372166    9844 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1212 19:49:16.372286    9844 certs.go:257] generating profile certs ...
	I1212 19:49:16.372592    9844 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\client.key
	I1212 19:49:16.372682    9844 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\client.crt with IP's: []
	I1212 19:49:16.518194    9844 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\client.crt ...
	I1212 19:49:16.518194    9844 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\client.crt: {Name:mk62f4c2d820d31ee1d632ecf73bdc587ad4af89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:49:16.519195    9844 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\client.key ...
	I1212 19:49:16.519195    9844 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\client.key: {Name:mkb2d00eb0827b413a0961b084ac98069e85e5b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:49:16.520203    9844 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.key.a2fee78d
	I1212 19:49:16.520203    9844 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.crt.a2fee78d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1212 19:49:16.659417    9844 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.crt.a2fee78d ...
	I1212 19:49:16.659417    9844 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.crt.a2fee78d: {Name:mk69d0293e1ad26385f887d7d4b3fb9427da8e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:49:16.660417    9844 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.key.a2fee78d ...
	I1212 19:49:16.660417    9844 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.key.a2fee78d: {Name:mk679cf68e7151c9788909f05fd0812dccd9bf96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:49:16.661418    9844 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.crt.a2fee78d -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.crt
	I1212 19:49:16.675609    9844 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.key.a2fee78d -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.key
	I1212 19:49:16.676417    9844 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.key
	I1212 19:49:16.676417    9844 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.crt with IP's: []
	I1212 19:49:16.787541    9844 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.crt ...
	I1212 19:49:16.787541    9844 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.crt: {Name:mk9a17eccea9d7b469bc72f7c524b39bd04dd648 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:49:16.788535    9844 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.key ...
	I1212 19:49:16.788535    9844 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.key: {Name:mkabbbe424aab39361ee5ec563dfa91f0e0a7df6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:49:16.802116    9844 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem (1338 bytes)
	W1212 19:49:16.803126    9844 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396_empty.pem, impossibly tiny 0 bytes
	I1212 19:49:16.803126    9844 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1212 19:49:16.803126    9844 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 19:49:16.803126    9844 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 19:49:16.803126    9844 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1212 19:49:16.803126    9844 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem (1708 bytes)
	I1212 19:49:16.804120    9844 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 19:49:16.835026    9844 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 19:49:16.864356    9844 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 19:49:16.891887    9844 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 19:49:16.919050    9844 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 19:49:16.943563    9844 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 19:49:16.971381    9844 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 19:49:17.000760    9844 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 19:49:17.028524    9844 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /usr/share/ca-certificates/133962.pem (1708 bytes)
	I1212 19:49:17.059318    9844 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 19:49:17.087466    9844 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem --> /usr/share/ca-certificates/13396.pem (1338 bytes)
	I1212 19:49:17.115789    9844 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 19:49:17.138111    9844 ssh_runner.go:195] Run: openssl version
	I1212 19:49:17.153338    9844 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/133962.pem
	I1212 19:49:17.169860    9844 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/133962.pem /etc/ssl/certs/133962.pem
	I1212 19:49:17.186059    9844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133962.pem
	I1212 19:49:17.195238    9844 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 19:49:17.198792    9844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133962.pem
	I1212 19:49:17.247145    9844 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 19:49:17.264104    9844 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/133962.pem /etc/ssl/certs/3ec20f2e.0
	I1212 19:49:17.282186    9844 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:17.297874    9844 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 19:49:17.313825    9844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:17.320383    9844 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:17.325505    9844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:17.372754    9844 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 19:49:17.390526    9844 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 19:49:17.408049    9844 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13396.pem
	I1212 19:49:17.427167    9844 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13396.pem /etc/ssl/certs/13396.pem
	I1212 19:49:17.445595    9844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13396.pem
	I1212 19:49:17.455577    9844 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 19:49:17.459952    9844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13396.pem
	I1212 19:49:17.506832    9844 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 19:49:17.523658    9844 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/13396.pem /etc/ssl/certs/51391683.0
	I1212 19:49:17.540253    9844 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 19:49:17.548813    9844 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 19:49:17.548813    9844 kubeadm.go:401] StartCluster: {Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:49:17.553149    9844 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 19:49:17.585444    9844 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 19:49:17.602788    9844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 19:49:17.614559    9844 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 19:49:17.619357    9844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 19:49:17.631391    9844 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 19:49:17.631391    9844 kubeadm.go:158] found existing configuration files:
	
	I1212 19:49:17.636429    9844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 19:49:17.648191    9844 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 19:49:17.652423    9844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 19:49:17.668850    9844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 19:49:17.681922    9844 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 19:49:17.686190    9844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 19:49:17.704332    9844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 19:49:17.718028    9844 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 19:49:17.722018    9844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 19:49:17.741286    9844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 19:49:17.752992    9844 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 19:49:17.758591    9844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 19:49:17.775403    9844 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 19:49:17.887736    9844 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1212 19:49:17.970628    9844 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 19:49:18.067402    9844 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 19:53:19.710606    9844 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 19:53:19.710672    9844 kubeadm.go:319] 
	I1212 19:53:19.710872    9844 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 19:53:19.715687    9844 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 19:53:19.715741    9844 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 19:53:19.715741    9844 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 19:53:19.715741    9844 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 19:53:19.715741    9844 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 19:53:19.716266    9844 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 19:53:19.716329    9844 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 19:53:19.716329    9844 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 19:53:19.716329    9844 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 19:53:19.716329    9844 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 19:53:19.716329    9844 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 19:53:19.716329    9844 kubeadm.go:319] CONFIG_INET: enabled
	I1212 19:53:19.716859    9844 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 19:53:19.716897    9844 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 19:53:19.716897    9844 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 19:53:19.716897    9844 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 19:53:19.716897    9844 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 19:53:19.716897    9844 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 19:53:19.717484    9844 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 19:53:19.717612    9844 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 19:53:19.717781    9844 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 19:53:19.717944    9844 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 19:53:19.718066    9844 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 19:53:19.718247    9844 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 19:53:19.718247    9844 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 19:53:19.718247    9844 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 19:53:19.718247    9844 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 19:53:19.718247    9844 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 19:53:19.718247    9844 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 19:53:19.718247    9844 kubeadm.go:319] OS: Linux
	I1212 19:53:19.718767    9844 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 19:53:19.718918    9844 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 19:53:19.719039    9844 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 19:53:19.719133    9844 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 19:53:19.719280    9844 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 19:53:19.719342    9844 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 19:53:19.719463    9844 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 19:53:19.719558    9844 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 19:53:19.719680    9844 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 19:53:19.719830    9844 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 19:53:19.720012    9844 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 19:53:19.720199    9844 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 19:53:19.720321    9844 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 19:53:19.723205    9844 out.go:252]   - Generating certificates and keys ...
	I1212 19:53:19.723205    9844 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 19:53:19.723205    9844 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 19:53:19.723205    9844 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 19:53:19.723205    9844 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 19:53:19.723205    9844 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 19:53:19.723205    9844 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 19:53:19.723205    9844 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 19:53:19.724164    9844 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-468800 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 19:53:19.724164    9844 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 19:53:19.724164    9844 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-468800 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 19:53:19.724164    9844 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 19:53:19.724164    9844 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 19:53:19.724164    9844 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 19:53:19.724164    9844 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 19:53:19.724164    9844 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 19:53:19.724164    9844 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 19:53:19.725164    9844 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 19:53:19.725164    9844 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 19:53:19.725164    9844 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 19:53:19.725164    9844 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 19:53:19.725164    9844 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 19:53:19.727878    9844 out.go:252]   - Booting up control plane ...
	I1212 19:53:19.728879    9844 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 19:53:19.728879    9844 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 19:53:19.728879    9844 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 19:53:19.728879    9844 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 19:53:19.728879    9844 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 19:53:19.728879    9844 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 19:53:19.729879    9844 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 19:53:19.729879    9844 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 19:53:19.729879    9844 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 19:53:19.729879    9844 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 19:53:19.729879    9844 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000372478s
	I1212 19:53:19.729879    9844 kubeadm.go:319] 
	I1212 19:53:19.729879    9844 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 19:53:19.729879    9844 kubeadm.go:319] 	- The kubelet is not running
	I1212 19:53:19.729879    9844 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 19:53:19.729879    9844 kubeadm.go:319] 
	I1212 19:53:19.729879    9844 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 19:53:19.729879    9844 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 19:53:19.729879    9844 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 19:53:19.730879    9844 kubeadm.go:319] 
	W1212 19:53:19.730879    9844 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-468800 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-468800 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000372478s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 19:53:19.736207    9844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1212 19:53:20.198768    9844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 19:53:20.217163    9844 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 19:53:20.222119    9844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 19:53:20.234001    9844 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 19:53:20.234001    9844 kubeadm.go:158] found existing configuration files:
	
	I1212 19:53:20.238906    9844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 19:53:20.251765    9844 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 19:53:20.257438    9844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 19:53:20.274955    9844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 19:53:20.290740    9844 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 19:53:20.295557    9844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 19:53:20.314551    9844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 19:53:20.329475    9844 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 19:53:20.333640    9844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 19:53:20.352703    9844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 19:53:20.365877    9844 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 19:53:20.371914    9844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 19:53:20.394243    9844 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 19:53:20.529481    9844 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1212 19:53:20.615456    9844 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 19:53:20.720783    9844 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 19:57:21.613834    9844 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 19:57:21.613971    9844 kubeadm.go:319] 
	I1212 19:57:21.614156    9844 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 19:57:21.619066    9844 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 19:57:21.619171    9844 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 19:57:21.619171    9844 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 19:57:21.619171    9844 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 19:57:21.619171    9844 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 19:57:21.619171    9844 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 19:57:21.619692    9844 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 19:57:21.619801    9844 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 19:57:21.619879    9844 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 19:57:21.619879    9844 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 19:57:21.619879    9844 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 19:57:21.619879    9844 kubeadm.go:319] CONFIG_INET: enabled
	I1212 19:57:21.619879    9844 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 19:57:21.619879    9844 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 19:57:21.619879    9844 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 19:57:21.620547    9844 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 19:57:21.620640    9844 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 19:57:21.620640    9844 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 19:57:21.620640    9844 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 19:57:21.620640    9844 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 19:57:21.620640    9844 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 19:57:21.620640    9844 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 19:57:21.621203    9844 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 19:57:21.621203    9844 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 19:57:21.621203    9844 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 19:57:21.621203    9844 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 19:57:21.621203    9844 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 19:57:21.621203    9844 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 19:57:21.621717    9844 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 19:57:21.621745    9844 kubeadm.go:319] OS: Linux
	I1212 19:57:21.621745    9844 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 19:57:21.621745    9844 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 19:57:21.621745    9844 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 19:57:21.621745    9844 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 19:57:21.621745    9844 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 19:57:21.621745    9844 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 19:57:21.622331    9844 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 19:57:21.622461    9844 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 19:57:21.622461    9844 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 19:57:21.622461    9844 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 19:57:21.622461    9844 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 19:57:21.622461    9844 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 19:57:21.623116    9844 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 19:57:21.627216    9844 out.go:252]   - Generating certificates and keys ...
	I1212 19:57:21.627216    9844 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 19:57:21.627216    9844 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 19:57:21.627216    9844 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 19:57:21.627833    9844 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 19:57:21.628026    9844 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 19:57:21.628063    9844 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 19:57:21.628063    9844 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 19:57:21.628063    9844 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 19:57:21.628063    9844 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 19:57:21.628651    9844 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 19:57:21.628651    9844 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 19:57:21.628651    9844 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 19:57:21.628651    9844 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 19:57:21.628651    9844 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 19:57:21.628651    9844 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 19:57:21.629263    9844 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 19:57:21.629263    9844 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 19:57:21.629263    9844 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 19:57:21.629263    9844 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 19:57:21.632108    9844 out.go:252]   - Booting up control plane ...
	I1212 19:57:21.632108    9844 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 19:57:21.632108    9844 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 19:57:21.632108    9844 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 19:57:21.632108    9844 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 19:57:21.632819    9844 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 19:57:21.632819    9844 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 19:57:21.632819    9844 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 19:57:21.632819    9844 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 19:57:21.632819    9844 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 19:57:21.632819    9844 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 19:57:21.632819    9844 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000203468s
	I1212 19:57:21.632819    9844 kubeadm.go:319] 
	I1212 19:57:21.632819    9844 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 19:57:21.632819    9844 kubeadm.go:319] 	- The kubelet is not running
	I1212 19:57:21.633863    9844 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 19:57:21.633863    9844 kubeadm.go:319] 
	I1212 19:57:21.633863    9844 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 19:57:21.633863    9844 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 19:57:21.633863    9844 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 19:57:21.633863    9844 kubeadm.go:319] 
	I1212 19:57:21.633863    9844 kubeadm.go:403] duration metric: took 8m4.0803405s to StartCluster
	I1212 19:57:21.633863    9844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:21.637857    9844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:21.695209    9844 cri.go:89] found id: ""
	I1212 19:57:21.695209    9844 logs.go:282] 0 containers: []
	W1212 19:57:21.695209    9844 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:21.695209    9844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 19:57:21.699862    9844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:21.737834    9844 cri.go:89] found id: ""
	I1212 19:57:21.737894    9844 logs.go:282] 0 containers: []
	W1212 19:57:21.737894    9844 logs.go:284] No container was found matching "etcd"
	I1212 19:57:21.737919    9844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 19:57:21.742566    9844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:21.793047    9844 cri.go:89] found id: ""
	I1212 19:57:21.793047    9844 logs.go:282] 0 containers: []
	W1212 19:57:21.793047    9844 logs.go:284] No container was found matching "coredns"
	I1212 19:57:21.793047    9844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:21.798463    9844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:21.836963    9844 cri.go:89] found id: ""
	I1212 19:57:21.836997    9844 logs.go:282] 0 containers: []
	W1212 19:57:21.836997    9844 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:21.836997    9844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:21.840837    9844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:21.884191    9844 cri.go:89] found id: ""
	I1212 19:57:21.884217    9844 logs.go:282] 0 containers: []
	W1212 19:57:21.884240    9844 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:21.884240    9844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:21.888531    9844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:21.932240    9844 cri.go:89] found id: ""
	I1212 19:57:21.932285    9844 logs.go:282] 0 containers: []
	W1212 19:57:21.932345    9844 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:21.932345    9844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:21.938732    9844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:21.978208    9844 cri.go:89] found id: ""
	I1212 19:57:21.978208    9844 logs.go:282] 0 containers: []
	W1212 19:57:21.978208    9844 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:21.978208    9844 logs.go:123] Gathering logs for Docker ...
	I1212 19:57:21.978208    9844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 19:57:22.007832    9844 logs.go:123] Gathering logs for container status ...
	I1212 19:57:22.007832    9844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:22.052546    9844 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:22.052546    9844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:22.116536    9844 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:22.116536    9844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:22.145129    9844 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:22.145184    9844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:22.227752    9844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:22.217809    9803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:22.219019    9803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:22.219861    9803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:22.221901    9803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:22.222662    9803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:22.217809    9803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:22.219019    9803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:22.219861    9803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:22.221901    9803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:22.222662    9803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	W1212 19:57:22.227752    9844 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000203468s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 19:57:22.227752    9844 out.go:285] * 
	W1212 19:57:22.227752    9844 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000203468s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 19:57:22.227752    9844 out.go:285] * 
	W1212 19:57:22.229964    9844 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 19:57:22.238174    9844 out.go:203] 
	W1212 19:57:22.242278    9844 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000203468s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 19:57:22.242662    9844 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 19:57:22.242662    9844 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 19:57:22.246368    9844 out.go:203] 
	
	
	==> Docker <==
	Dec 12 19:49:14 functional-468800 dockerd[1200]: time="2025-12-12T19:49:14.620366979Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 12 19:49:14 functional-468800 dockerd[1200]: time="2025-12-12T19:49:14.620445986Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 12 19:49:14 functional-468800 dockerd[1200]: time="2025-12-12T19:49:14.620458487Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 12 19:49:14 functional-468800 dockerd[1200]: time="2025-12-12T19:49:14.620465388Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 19:49:14 functional-468800 dockerd[1200]: time="2025-12-12T19:49:14.620471988Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 12 19:49:14 functional-468800 dockerd[1200]: time="2025-12-12T19:49:14.620493790Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 12 19:49:14 functional-468800 dockerd[1200]: time="2025-12-12T19:49:14.620581198Z" level=info msg="Initializing buildkit"
	Dec 12 19:49:14 functional-468800 dockerd[1200]: time="2025-12-12T19:49:14.725161141Z" level=info msg="Completed buildkit initialization"
	Dec 12 19:49:14 functional-468800 dockerd[1200]: time="2025-12-12T19:49:14.732829426Z" level=info msg="Daemon has completed initialization"
	Dec 12 19:49:14 functional-468800 dockerd[1200]: time="2025-12-12T19:49:14.732953937Z" level=info msg="API listen on /run/docker.sock"
	Dec 12 19:49:14 functional-468800 dockerd[1200]: time="2025-12-12T19:49:14.733033444Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 19:49:14 functional-468800 dockerd[1200]: time="2025-12-12T19:49:14.733032244Z" level=info msg="API listen on [::]:2376"
	Dec 12 19:49:14 functional-468800 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 12 19:49:15 functional-468800 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 19:49:15 functional-468800 cri-dockerd[1493]: time="2025-12-12T19:49:15Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 12 19:49:15 functional-468800 cri-dockerd[1493]: time="2025-12-12T19:49:15Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 12 19:49:15 functional-468800 cri-dockerd[1493]: time="2025-12-12T19:49:15Z" level=info msg="Start docker client with request timeout 0s"
	Dec 12 19:49:15 functional-468800 cri-dockerd[1493]: time="2025-12-12T19:49:15Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 12 19:49:15 functional-468800 cri-dockerd[1493]: time="2025-12-12T19:49:15Z" level=info msg="Loaded network plugin cni"
	Dec 12 19:49:15 functional-468800 cri-dockerd[1493]: time="2025-12-12T19:49:15Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 12 19:49:15 functional-468800 cri-dockerd[1493]: time="2025-12-12T19:49:15Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 12 19:49:15 functional-468800 cri-dockerd[1493]: time="2025-12-12T19:49:15Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 12 19:49:15 functional-468800 cri-dockerd[1493]: time="2025-12-12T19:49:15Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 12 19:49:15 functional-468800 cri-dockerd[1493]: time="2025-12-12T19:49:15Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 12 19:49:15 functional-468800 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:23.902462    9945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:23.903545    9945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:23.905785    9945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:23.907767    9945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:23.909226    9945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000873] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000908] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000822] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000796] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000923] FS:  0000000000000000 GS:  0000000000000000
	[  +6.594013] CPU: 0 PID: 44508 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000813] RIP: 0033:0x7fee23349b20
	[  +0.000380] Code: Unable to access opcode bytes at RIP 0x7fee23349af6.
	[  +0.000647] RSP: 002b:00007ffe765e9a80 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000809] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000796] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000773] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000778] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000786] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000772] FS:  0000000000000000 GS:  0000000000000000
	[  +0.823325] CPU: 10 PID: 44621 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000815] RIP: 0033:0x7f6a9ddcdb20
	[  +0.000406] Code: Unable to access opcode bytes at RIP 0x7f6a9ddcdaf6.
	[  +0.000783] RSP: 002b:00007ffef013b360 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000853] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000814] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000769] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000773] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000764] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000772] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 19:57:23 up 59 min,  0 user,  load average: 0.22, 0.41, 0.76
	Linux functional-468800 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 19:57:20 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:57:21 functional-468800 kubelet[9676]: E1212 19:57:21.061560    9676 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 19:57:21 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 19:57:21 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 19:57:21 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 12 19:57:21 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:57:21 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:57:21 functional-468800 kubelet[9711]: E1212 19:57:21.807046    9711 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 19:57:21 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 19:57:21 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 19:57:22 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 12 19:57:22 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:57:22 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:57:22 functional-468800 kubelet[9812]: E1212 19:57:22.568822    9812 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 19:57:22 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 19:57:22 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 19:57:23 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 12 19:57:23 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:57:23 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:57:23 functional-468800 kubelet[9841]: E1212 19:57:23.334052    9841 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 19:57:23 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 19:57:23 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 19:57:23 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 12 19:57:23 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:57:23 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
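Editor's note: every kubelet restart in the captured journal above dies at the same validation step ("kubelet is configured to not run on a host using cgroup v1"). A minimal way to confirm which cgroup version the node actually mounts, assuming a Linux shell on the node (e.g. via `minikube ssh`) and GNU coreutils `stat`:

```shell
# The filesystem type of /sys/fs/cgroup reveals the cgroup hierarchy in use:
# "cgroup2fs" means the unified v2 hierarchy; "tmpfs" means the legacy v1 setup
# that kubelet v1.35 rejects in the log above.
stat -fc %T /sys/fs/cgroup
```

On this WSL2 host (kernel 5.15.153.1-microsoft-standard-WSL2) the expected result is `tmpfs`, consistent with the cgroup v1 deprecation warnings throughout the log.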
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-468800 -n functional-468800
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-468800 -n functional-468800: exit status 6 (571.6886ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 19:57:24.838748    5352 status.go:458] kubeconfig endpoint: get endpoint: "functional-468800" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-468800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (517.91s)
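Editor's note: the SystemVerification warning and the kubelet validation error in the failure above point at the same knob — kubelet v1.35+ refuses to start on a cgroup v1 host unless `FailCgroupV1` is set to `false`. A sketch of that override, assuming the KubeletConfiguration v1beta1 schema and the usual camelCase YAML field name (how minikube would wire this through `--extra-config` is not shown in the log):

```yaml
# Sketch of the kubelet config override the SystemVerification warning refers to.
# On a cgroup v1 host, kubelet >= v1.35 exits at config validation unless this
# field is explicitly false (and the corresponding preflight check is skipped).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failCgroupV1: false
```

This is a workaround sketch only; the durable fix is migrating the host to cgroup v2, as the warning's linked KEP (sig-node/5573-remove-cgroup-v1) recommends.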

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (373.67s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1212 19:57:24.882341   13396 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-468800 --alsologtostderr -v=8
E1212 19:57:31.849804   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:57:59.559753   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:59:54.768255   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:02:31.853336   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-468800 --alsologtostderr -v=8: exit status 80 (6m9.7109464s)

                                                
                                                
-- stdout --
	* [functional-468800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting "functional-468800" primary control-plane node in "functional-468800" cluster
	* Pulling base image v0.0.48-1765505794-22112 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 19:57:24.956785    8792 out.go:360] Setting OutFile to fd 1808 ...
	I1212 19:57:24.998786    8792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:57:24.998786    8792 out.go:374] Setting ErrFile to fd 1700...
	I1212 19:57:24.998786    8792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:57:25.011786    8792 out.go:368] Setting JSON to false
	I1212 19:57:25.013782    8792 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3583,"bootTime":1765565861,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 19:57:25.013782    8792 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 19:57:25.016780    8792 out.go:179] * [functional-468800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 19:57:25.020780    8792 notify.go:221] Checking for updates...
	I1212 19:57:25.022780    8792 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 19:57:25.024782    8792 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 19:57:25.027780    8792 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 19:57:25.030779    8792 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 19:57:25.034782    8792 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:57:25.037790    8792 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 19:57:25.037790    8792 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:57:25.155476    8792 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 19:57:25.159985    8792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:57:25.387868    8792 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 19:57:25.372369133 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 19:57:25.391884    8792 out.go:179] * Using the docker driver based on existing profile
	I1212 19:57:25.396868    8792 start.go:309] selected driver: docker
	I1212 19:57:25.396868    8792 start.go:927] validating driver "docker" against &{Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreD
NSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:57:25.396868    8792 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 19:57:25.402871    8792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:57:25.622678    8792 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 19:57:25.606400505 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 19:57:25.701623    8792 cni.go:84] Creating CNI manager for ""
	I1212 19:57:25.701623    8792 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 19:57:25.701623    8792 start.go:353] cluster config:
	{Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stat
icIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:57:25.706631    8792 out.go:179] * Starting "functional-468800" primary control-plane node in "functional-468800" cluster
	I1212 19:57:25.708636    8792 cache.go:134] Beginning downloading kic base image for docker with docker
	I1212 19:57:25.711883    8792 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 19:57:25.714043    8792 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 19:57:25.714043    8792 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 19:57:25.714043    8792 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1212 19:57:25.714043    8792 cache.go:65] Caching tarball of preloaded images
	I1212 19:57:25.714043    8792 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 19:57:25.714043    8792 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1212 19:57:25.714043    8792 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\config.json ...
	I1212 19:57:25.792275    8792 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 19:57:25.792275    8792 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 19:57:25.792275    8792 cache.go:243] Successfully downloaded all kic artifacts
	I1212 19:57:25.792275    8792 start.go:360] acquireMachinesLock for functional-468800: {Name:mk7e7177bdfcb5a7969561474f8bb14fa15c1eab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 19:57:25.792275    8792 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-468800"
	I1212 19:57:25.792275    8792 start.go:96] Skipping create...Using existing machine configuration
	I1212 19:57:25.792275    8792 fix.go:54] fixHost starting: 
	I1212 19:57:25.799955    8792 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
	I1212 19:57:25.853025    8792 fix.go:112] recreateIfNeeded on functional-468800: state=Running err=<nil>
	W1212 19:57:25.853025    8792 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 19:57:25.856025    8792 out.go:252] * Updating the running docker "functional-468800" container ...
	I1212 19:57:25.856025    8792 machine.go:94] provisionDockerMachine start ...
	I1212 19:57:25.859025    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:25.918375    8792 main.go:143] libmachine: Using SSH client type: native
	I1212 19:57:25.918479    8792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:57:25.918479    8792 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 19:57:26.103358    8792 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-468800
	
	I1212 19:57:26.103411    8792 ubuntu.go:182] provisioning hostname "functional-468800"
	I1212 19:57:26.107534    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:26.162431    8792 main.go:143] libmachine: Using SSH client type: native
	I1212 19:57:26.162900    8792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:57:26.163030    8792 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-468800 && echo "functional-468800" | sudo tee /etc/hostname
	I1212 19:57:26.366993    8792 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-468800
	
	I1212 19:57:26.370927    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:26.421027    8792 main.go:143] libmachine: Using SSH client type: native
	I1212 19:57:26.422025    8792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:57:26.422025    8792 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-468800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-468800/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-468800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 19:57:26.592472    8792 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 19:57:26.592472    8792 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1212 19:57:26.592472    8792 ubuntu.go:190] setting up certificates
	I1212 19:57:26.592472    8792 provision.go:84] configureAuth start
	I1212 19:57:26.596494    8792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-468800
	I1212 19:57:26.648327    8792 provision.go:143] copyHostCerts
	I1212 19:57:26.648492    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1212 19:57:26.648569    8792 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1212 19:57:26.648569    8792 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1212 19:57:26.648569    8792 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 19:57:26.649807    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1212 19:57:26.649946    8792 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1212 19:57:26.649946    8792 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1212 19:57:26.649946    8792 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 19:57:26.650879    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1212 19:57:26.650879    8792 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1212 19:57:26.650879    8792 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1212 19:57:26.650879    8792 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1212 19:57:26.651440    8792 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-468800 san=[127.0.0.1 192.168.49.2 functional-468800 localhost minikube]
	I1212 19:57:26.782013    8792 provision.go:177] copyRemoteCerts
	I1212 19:57:26.785479    8792 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 19:57:26.788240    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:26.842524    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:26.968619    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1212 19:57:26.968964    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 19:57:26.995759    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1212 19:57:26.995759    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 19:57:27.024847    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1212 19:57:27.024847    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 19:57:27.057221    8792 provision.go:87] duration metric: took 464.7444ms to configureAuth
	I1212 19:57:27.057221    8792 ubuntu.go:206] setting minikube options for container-runtime
	I1212 19:57:27.057221    8792 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 19:57:27.061251    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:27.121889    8792 main.go:143] libmachine: Using SSH client type: native
	I1212 19:57:27.122548    8792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:57:27.122604    8792 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 19:57:27.313910    8792 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1212 19:57:27.313910    8792 ubuntu.go:71] root file system type: overlay
	I1212 19:57:27.313910    8792 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 19:57:27.317488    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:27.376486    8792 main.go:143] libmachine: Using SSH client type: native
	I1212 19:57:27.377052    8792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:57:27.377052    8792 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 19:57:27.577536    8792 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 19:57:27.581688    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:27.635455    8792 main.go:143] libmachine: Using SSH client type: native
	I1212 19:57:27.635931    8792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:57:27.635954    8792 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 19:57:27.828516    8792 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 19:57:27.828574    8792 machine.go:97] duration metric: took 1.9725293s to provisionDockerMachine
	I1212 19:57:27.828619    8792 start.go:293] postStartSetup for "functional-468800" (driver="docker")
	I1212 19:57:27.828619    8792 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 19:57:27.833127    8792 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 19:57:27.836440    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:27.891552    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:28.022421    8792 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 19:57:28.031829    8792 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1212 19:57:28.031829    8792 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1212 19:57:28.031829    8792 command_runner.go:130] > VERSION_ID="12"
	I1212 19:57:28.031829    8792 command_runner.go:130] > VERSION="12 (bookworm)"
	I1212 19:57:28.031829    8792 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1212 19:57:28.031829    8792 command_runner.go:130] > ID=debian
	I1212 19:57:28.031829    8792 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1212 19:57:28.031829    8792 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1212 19:57:28.031829    8792 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1212 19:57:28.031829    8792 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 19:57:28.031829    8792 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 19:57:28.031829    8792 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1212 19:57:28.032546    8792 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1212 19:57:28.033148    8792 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> 133962.pem in /etc/ssl/certs
	I1212 19:57:28.033204    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> /etc/ssl/certs/133962.pem
	I1212 19:57:28.033277    8792 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\13396\hosts -> hosts in /etc/test/nested/copy/13396
	I1212 19:57:28.033277    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\13396\hosts -> /etc/test/nested/copy/13396/hosts
	I1212 19:57:28.037935    8792 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/13396
	I1212 19:57:28.050821    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /etc/ssl/certs/133962.pem (1708 bytes)
	I1212 19:57:28.081156    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\13396\hosts --> /etc/test/nested/copy/13396/hosts (40 bytes)
	I1212 19:57:28.109846    8792 start.go:296] duration metric: took 281.2243ms for postStartSetup
	I1212 19:57:28.115818    8792 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 19:57:28.118674    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:28.171853    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:28.302700    8792 command_runner.go:130] > 1%
	I1212 19:57:28.308193    8792 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 19:57:28.316146    8792 command_runner.go:130] > 950G
	I1212 19:57:28.316204    8792 fix.go:56] duration metric: took 2.5239035s for fixHost
	I1212 19:57:28.316204    8792 start.go:83] releasing machines lock for "functional-468800", held for 2.5239035s
	I1212 19:57:28.320187    8792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-468800
	I1212 19:57:28.373764    8792 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1212 19:57:28.378728    8792 ssh_runner.go:195] Run: cat /version.json
	I1212 19:57:28.378728    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:28.382043    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:28.432252    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:28.433503    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:28.550849    8792 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W1212 19:57:28.550961    8792 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1212 19:57:28.550961    8792 command_runner.go:130] > {"iso_version": "v1.37.0-1765481609-22101", "kicbase_version": "v0.0.48-1765505794-22112", "minikube_version": "v1.37.0", "commit": "2e51b54b5cee5d454381ac23cfe3d8d395879671"}
	I1212 19:57:28.556187    8792 ssh_runner.go:195] Run: systemctl --version
	I1212 19:57:28.565686    8792 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1212 19:57:28.565686    8792 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1212 19:57:28.570074    8792 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 19:57:28.577782    8792 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 19:57:28.578775    8792 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 19:57:28.583114    8792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 19:57:28.595283    8792 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 19:57:28.595283    8792 start.go:496] detecting cgroup driver to use...
	I1212 19:57:28.595283    8792 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 19:57:28.595283    8792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 19:57:28.617880    8792 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 19:57:28.622700    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 19:57:28.640953    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 19:57:28.655059    8792 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 19:57:28.659503    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1212 19:57:28.659726    8792 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1212 19:57:28.659726    8792 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1212 19:57:28.678759    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 19:57:28.696413    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 19:57:28.715842    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 19:57:28.736528    8792 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 19:57:28.755951    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 19:57:28.776240    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 19:57:28.795721    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
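The `sed` edits above rewrite containerd's `config.toml` in place: pinning the pause image, forcing the cgroupfs driver (`SystemdCgroup = false`), and pointing `conf_dir` at `/etc/cni/net.d`. A sketch of the same substitutions against a sample fragment (file contents are illustrative; `sed -i -r` assumes GNU sed, as on the minikube node):

```shell
CFG="$(mktemp)"
cat > "$CFG" <<'EOF'
    sandbox_image = "registry.k8s.io/pause:3.9"
    SystemdCgroup = true
    conf_dir = "/opt/cni/net.d"
EOF
# Same substitutions the log applies; \1 preserves the TOML indentation.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' "$CFG"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$CFG"
cat "$CFG"
```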
	I1212 19:57:28.815051    8792 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 19:57:28.829778    8792 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 19:57:28.834204    8792 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 19:57:28.852899    8792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:57:28.995620    8792 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 19:57:29.167559    8792 start.go:496] detecting cgroup driver to use...
	I1212 19:57:29.167559    8792 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 19:57:29.172911    8792 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 19:57:29.191693    8792 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1212 19:57:29.191693    8792 command_runner.go:130] > [Unit]
	I1212 19:57:29.191693    8792 command_runner.go:130] > Description=Docker Application Container Engine
	I1212 19:57:29.191693    8792 command_runner.go:130] > Documentation=https://docs.docker.com
	I1212 19:57:29.191693    8792 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1212 19:57:29.191693    8792 command_runner.go:130] > Wants=network-online.target containerd.service
	I1212 19:57:29.191693    8792 command_runner.go:130] > Requires=docker.socket
	I1212 19:57:29.191693    8792 command_runner.go:130] > StartLimitBurst=3
	I1212 19:57:29.191693    8792 command_runner.go:130] > StartLimitIntervalSec=60
	I1212 19:57:29.191693    8792 command_runner.go:130] > [Service]
	I1212 19:57:29.191693    8792 command_runner.go:130] > Type=notify
	I1212 19:57:29.191693    8792 command_runner.go:130] > Restart=always
	I1212 19:57:29.191693    8792 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1212 19:57:29.191693    8792 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1212 19:57:29.191693    8792 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1212 19:57:29.191693    8792 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1212 19:57:29.191693    8792 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1212 19:57:29.191693    8792 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1212 19:57:29.191693    8792 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1212 19:57:29.191693    8792 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1212 19:57:29.191693    8792 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1212 19:57:29.191693    8792 command_runner.go:130] > ExecStart=
	I1212 19:57:29.191693    8792 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1212 19:57:29.191693    8792 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1212 19:57:29.191693    8792 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1212 19:57:29.191693    8792 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1212 19:57:29.191693    8792 command_runner.go:130] > LimitNOFILE=infinity
	I1212 19:57:29.191693    8792 command_runner.go:130] > LimitNPROC=infinity
	I1212 19:57:29.191693    8792 command_runner.go:130] > LimitCORE=infinity
	I1212 19:57:29.191693    8792 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1212 19:57:29.191693    8792 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1212 19:57:29.191693    8792 command_runner.go:130] > TasksMax=infinity
	I1212 19:57:29.191693    8792 command_runner.go:130] > TimeoutStartSec=0
	I1212 19:57:29.191693    8792 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1212 19:57:29.191693    8792 command_runner.go:130] > Delegate=yes
	I1212 19:57:29.191693    8792 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1212 19:57:29.191693    8792 command_runner.go:130] > KillMode=process
	I1212 19:57:29.191693    8792 command_runner.go:130] > OOMScoreAdjust=-500
	I1212 19:57:29.191693    8792 command_runner.go:130] > [Install]
	I1212 19:57:29.191693    8792 command_runner.go:130] > WantedBy=multi-user.target
	I1212 19:57:29.196788    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 19:57:29.221924    8792 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 19:57:29.312337    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 19:57:29.337554    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 19:57:29.357559    8792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 19:57:29.379522    8792 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1212 19:57:29.384213    8792 ssh_runner.go:195] Run: which cri-dockerd
	I1212 19:57:29.390808    8792 command_runner.go:130] > /usr/bin/cri-dockerd
	I1212 19:57:29.396438    8792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 19:57:29.409074    8792 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1212 19:57:29.434191    8792 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 19:57:29.578871    8792 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 19:57:29.719341    8792 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 19:57:29.719341    8792 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 19:57:29.746173    8792 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1212 19:57:29.768870    8792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:57:29.905737    8792 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 19:57:30.757640    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 19:57:30.780953    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1212 19:57:30.802218    8792 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1212 19:57:30.829184    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 19:57:30.853409    8792 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 19:57:30.994012    8792 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 19:57:31.134627    8792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:57:31.283484    8792 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 19:57:31.309618    8792 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1212 19:57:31.333897    8792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:57:31.475108    8792 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1212 19:57:31.578219    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 19:57:31.597007    8792 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 19:57:31.600988    8792 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 19:57:31.610316    8792 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1212 19:57:31.611281    8792 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 19:57:31.611281    8792 command_runner.go:130] > Device: 0,112	Inode: 1755        Links: 1
	I1212 19:57:31.611281    8792 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1212 19:57:31.611281    8792 command_runner.go:130] > Access: 2025-12-12 19:57:31.474638770 +0000
	I1212 19:57:31.611281    8792 command_runner.go:130] > Modify: 2025-12-12 19:57:31.474638770 +0000
	I1212 19:57:31.611281    8792 command_runner.go:130] > Change: 2025-12-12 19:57:31.484639595 +0000
	I1212 19:57:31.611281    8792 command_runner.go:130] >  Birth: -
	I1212 19:57:31.611281    8792 start.go:564] Will wait 60s for crictl version
	I1212 19:57:31.615844    8792 ssh_runner.go:195] Run: which crictl
	I1212 19:57:31.621876    8792 command_runner.go:130] > /usr/local/bin/crictl
	I1212 19:57:31.626999    8792 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 19:57:31.672687    8792 command_runner.go:130] > Version:  0.1.0
	I1212 19:57:31.672762    8792 command_runner.go:130] > RuntimeName:  docker
	I1212 19:57:31.672762    8792 command_runner.go:130] > RuntimeVersion:  29.1.2
	I1212 19:57:31.672762    8792 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 19:57:31.672790    8792 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1212 19:57:31.676132    8792 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 19:57:31.713311    8792 command_runner.go:130] > 29.1.2
	I1212 19:57:31.716489    8792 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 19:57:31.755737    8792 command_runner.go:130] > 29.1.2
	I1212 19:57:31.761482    8792 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1212 19:57:31.765357    8792 cli_runner.go:164] Run: docker exec -t functional-468800 dig +short host.docker.internal
	I1212 19:57:31.901903    8792 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1212 19:57:31.906530    8792 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1212 19:57:31.913687    8792 command_runner.go:130] > 192.168.65.254	host.minikube.internal
	I1212 19:57:31.917320    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:31.973317    8792 kubeadm.go:884] updating cluster {Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 19:57:31.973590    8792 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 19:57:31.977450    8792 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1212 19:57:32.013673    8792 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 19:57:32.013673    8792 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 19:57:32.013673    8792 docker.go:621] Images already preloaded, skipping extraction
	I1212 19:57:32.017349    8792 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1212 19:57:32.047537    8792 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 19:57:32.047537    8792 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 19:57:32.047537    8792 cache_images.go:86] Images are preloaded, skipping loading
	I1212 19:57:32.047537    8792 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1212 19:57:32.048190    8792 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-468800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 19:57:32.051146    8792 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 19:57:32.121447    8792 command_runner.go:130] > cgroupfs
	I1212 19:57:32.121447    8792 cni.go:84] Creating CNI manager for ""
	I1212 19:57:32.121447    8792 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 19:57:32.121447    8792 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 19:57:32.121964    8792 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-468800 NodeName:functional-468800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 19:57:32.122106    8792 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-468800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
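The generated config above is a four-document YAML bundle, separated by `---`: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A quick structural check on a bundle of that shape (the heredoc below is a trimmed stand-in for the full manifest):

```shell
# Each kubeadm bundle document carries exactly one top-level "kind:" line,
# so counting them confirms all four documents are present.
MANIFEST="$(mktemp)"
cat > "$MANIFEST" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
grep -c '^kind:' "$MANIFEST"
```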
	I1212 19:57:32.126035    8792 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 19:57:32.138764    8792 command_runner.go:130] > kubeadm
	I1212 19:57:32.138798    8792 command_runner.go:130] > kubectl
	I1212 19:57:32.138825    8792 command_runner.go:130] > kubelet
	I1212 19:57:32.138845    8792 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 19:57:32.143533    8792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 19:57:32.155602    8792 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1212 19:57:32.179900    8792 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 19:57:32.199342    8792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1212 19:57:32.222871    8792 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 19:57:32.229151    8792 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1212 19:57:32.234589    8792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:57:32.373967    8792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 19:57:32.974236    8792 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800 for IP: 192.168.49.2
	I1212 19:57:32.974236    8792 certs.go:195] generating shared ca certs ...
	I1212 19:57:32.974236    8792 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:57:32.975214    8792 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1212 19:57:32.975214    8792 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1212 19:57:32.975214    8792 certs.go:257] generating profile certs ...
	I1212 19:57:32.976191    8792 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\client.key
	I1212 19:57:32.976561    8792 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.key.a2fee78d
	I1212 19:57:32.976892    8792 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.key
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 19:57:32.977527    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 19:57:32.977863    8792 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem (1338 bytes)
	W1212 19:57:32.977863    8792 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396_empty.pem, impossibly tiny 0 bytes
	I1212 19:57:32.977863    8792 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1212 19:57:32.978401    8792 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 19:57:32.978646    8792 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 19:57:32.978696    8792 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1212 19:57:32.978696    8792 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem (1708 bytes)
	I1212 19:57:32.979304    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem -> /usr/share/ca-certificates/13396.pem
	I1212 19:57:32.979449    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> /usr/share/ca-certificates/133962.pem
	I1212 19:57:32.979529    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:32.980729    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 19:57:33.008686    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 19:57:33.035660    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 19:57:33.063247    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 19:57:33.108547    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 19:57:33.138500    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 19:57:33.165883    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 19:57:33.195246    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 19:57:33.221022    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem --> /usr/share/ca-certificates/13396.pem (1338 bytes)
	I1212 19:57:33.248791    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /usr/share/ca-certificates/133962.pem (1708 bytes)
	I1212 19:57:33.274438    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 19:57:33.302337    8792 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 19:57:33.324312    8792 ssh_runner.go:195] Run: openssl version
	I1212 19:57:33.335263    8792 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1212 19:57:33.339948    8792 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13396.pem
	I1212 19:57:33.356389    8792 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13396.pem /etc/ssl/certs/13396.pem
	I1212 19:57:33.375441    8792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13396.pem
	I1212 19:57:33.384435    8792 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 19:57:33.384435    8792 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 19:57:33.387660    8792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13396.pem
	I1212 19:57:33.430281    8792 command_runner.go:130] > 51391683
	I1212 19:57:33.435287    8792 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 19:57:33.452481    8792 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/133962.pem
	I1212 19:57:33.471523    8792 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/133962.pem /etc/ssl/certs/133962.pem
	I1212 19:57:33.489874    8792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133962.pem
	I1212 19:57:33.498068    8792 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 19:57:33.498068    8792 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 19:57:33.502698    8792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133962.pem
	I1212 19:57:33.544550    8792 command_runner.go:130] > 3ec20f2e
	I1212 19:57:33.549548    8792 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 19:57:33.566747    8792 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:33.583990    8792 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 19:57:33.600438    8792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:33.607945    8792 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:33.607945    8792 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:33.614484    8792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:33.657826    8792 command_runner.go:130] > b5213941
	I1212 19:57:33.662138    8792 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
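	The sequence above (hash each CA cert, then symlink it as `<hash>.0` under `/etc/ssl/certs`) is how minikube registers its CAs with the system trust store: OpenSSL locates a CA in a hashed directory by the file name `<subject-hash>.N`. A minimal sketch of the same steps, using a throwaway self-signed cert in a temp directory instead of minikube's files:

```shell
set -e
tmp=$(mktemp -d)
# Generate a throwaway self-signed CA (illustrative; any PEM cert works)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$tmp/ca.key" \
  -out "$tmp/ca.pem" -days 1 -subj "/CN=exampleCA" 2>/dev/null
# Same command the log runs: print the subject hash OpenSSL uses for lookup
hash=$(openssl x509 -hash -noout -in "$tmp/ca.pem")
# Link the cert under <subject-hash>.0, as minikube does in /etc/ssl/certs
ln -fs "$tmp/ca.pem" "$tmp/${hash}.0"
test -L "$tmp/${hash}.0" && echo linked
```

The trailing `sudo test -L /etc/ssl/certs/<hash>.0` calls in the log are the equivalent verification step against the real trust store.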
	I1212 19:57:33.678498    8792 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 19:57:33.685111    8792 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 19:57:33.685111    8792 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1212 19:57:33.685111    8792 command_runner.go:130] > Device: 8,48	Inode: 15292       Links: 1
	I1212 19:57:33.685111    8792 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 19:57:33.685797    8792 command_runner.go:130] > Access: 2025-12-12 19:53:20.728281925 +0000
	I1212 19:57:33.685797    8792 command_runner.go:130] > Modify: 2025-12-12 19:49:18.176374111 +0000
	I1212 19:57:33.685797    8792 command_runner.go:130] > Change: 2025-12-12 19:49:18.176374111 +0000
	I1212 19:57:33.685797    8792 command_runner.go:130] >  Birth: 2025-12-12 19:49:18.176374111 +0000
	I1212 19:57:33.689949    8792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 19:57:33.733144    8792 command_runner.go:130] > Certificate will not expire
	I1212 19:57:33.737823    8792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 19:57:33.780151    8792 command_runner.go:130] > Certificate will not expire
	I1212 19:57:33.785054    8792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 19:57:33.827773    8792 command_runner.go:130] > Certificate will not expire
	I1212 19:57:33.833292    8792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 19:57:33.875401    8792 command_runner.go:130] > Certificate will not expire
	I1212 19:57:33.880293    8792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 19:57:33.922924    8792 command_runner.go:130] > Certificate will not expire
	I1212 19:57:33.927940    8792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 19:57:33.970239    8792 command_runner.go:130] > Certificate will not expire
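	The `Certificate will not expire` lines above are printed by `openssl x509 -checkend` itself: `-checkend N` exits 0 (and prints that message) when the certificate is still valid `N` seconds from now, which is how minikube decides whether each control-plane cert survives the next 24 hours (86400 s). A self-contained sketch with a freshly generated cert (paths are illustrative):

```shell
set -e
tmp=$(mktemp -d)
# Generate a short-lived demo cert (30 days) to check against
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$tmp/tls.key" \
  -out "$tmp/tls.crt" -days 30 -subj "/CN=demo" 2>/dev/null
# Exit 0 and print "Certificate will not expire" if still valid in 24h
openssl x509 -noout -in "$tmp/tls.crt" -checkend 86400
```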
	I1212 19:57:33.970239    8792 kubeadm.go:401] StartCluster: {Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:57:33.976672    8792 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 19:57:34.008252    8792 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 19:57:34.020977    8792 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1212 19:57:34.021018    8792 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1212 19:57:34.021018    8792 command_runner.go:130] > /var/lib/minikube/etcd:
	I1212 19:57:34.021108    8792 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 19:57:34.021108    8792 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 19:57:34.025234    8792 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 19:57:34.045139    8792 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 19:57:34.049590    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:34.107138    8792 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-468800" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 19:57:34.107889    8792 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "functional-468800" cluster setting kubeconfig missing "functional-468800" context setting]
	I1212 19:57:34.107889    8792 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:57:34.126355    8792 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 19:57:34.126843    8792 kapi.go:59] client config for functional-468800: &rest.Config{Host:"https://127.0.0.1:55778", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData
:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff6e1d19080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 19:57:34.128169    8792 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 19:57:34.128230    8792 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 19:57:34.128230    8792 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 19:57:34.128350    8792 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 19:57:34.128350    8792 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 19:57:34.128350    8792 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 19:57:34.132435    8792 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 19:57:34.149951    8792 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1212 19:57:34.150008    8792 kubeadm.go:602] duration metric: took 128.8994ms to restartPrimaryControlPlane
	I1212 19:57:34.150032    8792 kubeadm.go:403] duration metric: took 179.7913ms to StartCluster
	I1212 19:57:34.150032    8792 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:57:34.150032    8792 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 19:57:34.151180    8792 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:57:34.152111    8792 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 19:57:34.152111    8792 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 19:57:34.152386    8792 addons.go:70] Setting storage-provisioner=true in profile "functional-468800"
	I1212 19:57:34.152386    8792 addons.go:70] Setting default-storageclass=true in profile "functional-468800"
	I1212 19:57:34.152426    8792 addons.go:239] Setting addon storage-provisioner=true in "functional-468800"
	I1212 19:57:34.152475    8792 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-468800"
	I1212 19:57:34.152564    8792 host.go:66] Checking if "functional-468800" exists ...
	I1212 19:57:34.152599    8792 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 19:57:34.155555    8792 out.go:179] * Verifying Kubernetes components...
	I1212 19:57:34.161161    8792 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
	I1212 19:57:34.161613    8792 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
	I1212 19:57:34.163072    8792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:57:34.221534    8792 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 19:57:34.221534    8792 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 19:57:34.221534    8792 kapi.go:59] client config for functional-468800: &rest.Config{Host:"https://127.0.0.1:55778", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff6e1d19080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 19:57:34.222943    8792 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 19:57:34.223481    8792 addons.go:239] Setting addon default-storageclass=true in "functional-468800"
	I1212 19:57:34.223558    8792 host.go:66] Checking if "functional-468800" exists ...
	I1212 19:57:34.223558    8792 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:34.223558    8792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 19:57:34.227691    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:34.230256    8792 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
	I1212 19:57:34.287093    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:34.289848    8792 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:34.289848    8792 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 19:57:34.293811    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:34.345554    8792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 19:57:34.348560    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:34.426758    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:34.480013    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:34.480104    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:34.534162    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:34.538400    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.538479    8792 retry.go:31] will retry after 344.600735ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.538532    8792 node_ready.go:35] waiting up to 6m0s for node "functional-468800" to be "Ready" ...
	I1212 19:57:34.539394    8792 type.go:168] "Request Body" body=""
	I1212 19:57:34.539597    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:34.541949    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:34.608531    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:34.613599    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.613599    8792 retry.go:31] will retry after 216.683996ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.835959    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:34.887701    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:34.908576    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:34.913475    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.913475    8792 retry.go:31] will retry after 230.473341ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.961197    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:34.966061    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.966061    8792 retry.go:31] will retry after 349.771822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.150121    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:35.221040    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:35.228247    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.228333    8792 retry.go:31] will retry after 512.778483ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.321063    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:35.394131    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:35.397148    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.397148    8792 retry.go:31] will retry after 487.352123ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.542707    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:35.542707    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:35.545160    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:35.747496    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:35.819613    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:35.822659    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.822659    8792 retry.go:31] will retry after 1.154413243s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.890743    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:35.965246    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:35.972460    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.972460    8792 retry.go:31] will retry after 1.245938436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:36.545730    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:36.545730    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:36.549771    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:57:36.983387    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:37.090901    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:37.094847    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:37.094847    8792 retry.go:31] will retry after 1.548342934s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:37.223991    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:37.295689    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:37.299705    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:37.299769    8792 retry.go:31] will retry after 1.579528606s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:37.551013    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:37.551013    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:37.554154    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:38.554939    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:38.555432    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:38.558234    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:38.649390    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:38.725500    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:38.729499    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:38.729499    8792 retry.go:31] will retry after 2.648471583s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:38.884600    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:38.953302    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:38.958318    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:38.958318    8792 retry.go:31] will retry after 2.058418403s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:39.559077    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:39.559356    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:39.562225    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:40.562954    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:40.563393    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:40.566347    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:41.022091    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:41.102318    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:41.106247    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:41.106247    8792 retry.go:31] will retry after 3.080320353s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:41.384408    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:41.470520    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:41.473795    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:41.473795    8792 retry.go:31] will retry after 2.343057986s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:41.566604    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:41.566604    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:41.569639    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:42.569950    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:42.569950    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:42.573153    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:43.573545    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:43.573545    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:43.577655    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:57:43.821674    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:43.897847    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:43.901846    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:43.901846    8792 retry.go:31] will retry after 5.566518346s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:44.193277    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:44.263403    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:44.269459    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:44.269459    8792 retry.go:31] will retry after 4.550082482s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:44.577835    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:44.577835    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:44.580876    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:57:44.581034    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:57:44.581158    8792 type.go:168] "Request Body" body=""
	I1212 19:57:44.581244    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:44.583508    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:45.583961    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:45.583961    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:45.587161    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:46.587855    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:46.588199    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:46.590728    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:47.591504    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:47.591504    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:47.594168    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:48.595392    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:48.595392    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:48.601208    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1212 19:57:48.824534    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:48.903714    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:48.909283    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:48.909283    8792 retry.go:31] will retry after 5.408295828s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:49.475338    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:49.554836    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:49.559515    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:49.559515    8792 retry.go:31] will retry after 7.920709676s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:49.602224    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:49.602480    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:49.605147    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:50.605575    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:50.605575    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:50.609094    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:51.610210    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:51.610210    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:51.613279    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:52.613438    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:52.613438    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:52.617857    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:57:53.618444    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:53.618444    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:53.622009    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:54.323567    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:54.399774    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:54.402767    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:54.402767    8792 retry.go:31] will retry after 5.650885129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:54.622233    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:54.622233    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:54.625806    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:57:54.625833    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:57:54.625833    8792 type.go:168] "Request Body" body=""
	I1212 19:57:54.625833    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:54.628220    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:55.628567    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:55.628567    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:55.632067    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:56.632335    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:56.632737    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:56.635417    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:57.485659    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:57.566715    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:57.570725    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:57.570725    8792 retry.go:31] will retry after 5.889801353s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:57.635601    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:57.636162    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:57.638437    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:58.639201    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:58.639201    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:58.641202    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:59.642751    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:59.642751    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:59.645820    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:00.059077    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:58:00.141196    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:00.144743    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:00.144828    8792 retry.go:31] will retry after 12.880427161s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:00.646278    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:00.646278    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:00.648514    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:01.648554    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:01.648554    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:01.652477    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:02.652719    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:02.652719    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:02.656865    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:58:03.466574    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:58:03.546687    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:03.552160    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:03.552160    8792 retry.go:31] will retry after 8.684375444s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:03.657068    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:03.657068    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:03.660376    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:04.660836    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:04.661165    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:04.664417    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:58:04.664489    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:58:04.664634    8792 type.go:168] "Request Body" body=""
	I1212 19:58:04.664723    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:04.667029    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:05.667419    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:05.667419    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:05.670032    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:06.670984    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:06.670984    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:06.674354    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:07.675175    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:07.675473    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:07.678161    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:08.679000    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:08.679000    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:08.682498    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:09.683536    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:09.684039    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:09.686703    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:10.687176    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:10.687514    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:10.691708    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:58:11.692097    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:11.692097    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:11.695419    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:12.243184    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:58:12.329214    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:12.335592    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:12.335592    8792 retry.go:31] will retry after 19.078221738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:12.695735    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:12.695735    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:12.698564    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:13.030727    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:58:13.107677    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:13.111475    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:13.111475    8792 retry.go:31] will retry after 24.078034123s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:13.699329    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:13.699329    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:13.703201    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:14.703632    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:14.703632    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:14.706632    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:58:14.706632    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:58:14.706632    8792 type.go:168] "Request Body" body=""
	I1212 19:58:14.706632    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:14.709461    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:15.709987    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:15.709987    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:15.713881    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:16.714426    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:16.714947    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:16.717509    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:17.718027    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:17.718027    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:17.721452    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:18.721719    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:18.722180    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:18.725521    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:19.726174    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:19.726174    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:19.731274    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:58:20.731838    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:20.731838    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:20.735774    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:21.736083    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:21.736083    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:21.739364    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:22.740462    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:22.740462    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:22.743494    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:23.744218    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:23.744882    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:23.747961    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:24.748401    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:24.748401    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:24.752939    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1212 19:58:24.752939    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:58:24.752939    8792 type.go:168] "Request Body" body=""
	I1212 19:58:24.752939    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:24.756295    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:25.756593    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:25.756959    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:25.759330    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:26.760825    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:26.760825    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:26.765414    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:58:27.765653    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:27.765653    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:27.769152    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:28.770176    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:28.770595    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:28.774341    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:29.774498    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:29.774498    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:29.777488    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:30.778437    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:30.778437    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:30.781414    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:31.419403    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:58:31.498102    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:31.502554    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:31.502554    8792 retry.go:31] will retry after 21.655222228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:31.781482    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:31.781482    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:31.783476    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1212 19:58:32.785130    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:32.785130    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:32.787452    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:33.788547    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:33.788547    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:33.791489    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:34.792428    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:34.792428    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:34.794457    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 19:58:34.794457    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:58:34.794457    8792 type.go:168] "Request Body" body=""
	I1212 19:58:34.794457    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:34.796423    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1212 19:58:35.796926    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:35.796926    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:35.800403    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:36.800694    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:36.800694    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:36.803902    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:37.195194    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:58:37.275035    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:37.278655    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:37.278655    8792 retry.go:31] will retry after 33.639329095s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:37.804194    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:37.804194    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:37.807496    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:38.808801    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:38.808801    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:38.811801    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:39.812262    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:39.812262    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:39.815469    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:40.816141    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:40.816141    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:40.819310    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:41.819973    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:41.819973    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:41.823039    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:42.824053    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:42.824053    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:42.827675    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:43.828345    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:43.828345    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:43.830350    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:44.830883    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:44.830883    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:44.834425    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:58:44.834502    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:58:44.834607    8792 type.go:168] "Request Body" body=""
	I1212 19:58:44.834703    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:44.836790    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:45.837202    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:45.837202    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:45.840615    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:46.840700    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:46.840700    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:46.843992    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:47.844334    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:47.844334    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:47.847669    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:48.848509    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:48.848509    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:48.851509    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:49.852471    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:49.852471    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:49.855417    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:50.855889    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:50.855889    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:50.858888    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:51.859324    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:51.859324    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:51.862752    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:52.863764    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:52.863764    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:52.867051    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:53.163493    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:58:53.239799    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:53.245721    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:53.245920    8792 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 19:58:53.867924    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:53.867924    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:53.871211    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:54.872502    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:54.872502    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:54.875103    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 19:58:54.875103    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:58:54.875635    8792 type.go:168] "Request Body" body=""
	I1212 19:58:54.875635    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:54.878074    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:55.878391    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:55.878391    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:55.881700    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:56.882314    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:56.882731    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:56.885332    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:57.886661    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:57.886661    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:57.890321    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:58.891069    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:58.891069    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:58.894045    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:59.894455    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:59.894455    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:59.897144    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:00.897724    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:00.897724    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:00.900925    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:01.901327    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:01.901327    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:01.904820    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:02.905377    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:02.905668    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:02.908844    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:03.909357    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:03.909357    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:03.912567    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:04.913190    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:04.913190    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:04.916248    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:59:04.916248    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:59:04.916248    8792 type.go:168] "Request Body" body=""
	I1212 19:59:04.916248    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:04.918608    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:05.918787    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:05.919084    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:05.921580    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:06.921873    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:06.921873    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:06.925988    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:59:07.927045    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:07.927045    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:07.930359    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:08.930575    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:08.930575    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:08.934014    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:09.935175    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:09.935175    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:09.939760    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:59:10.923536    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:59:10.940298    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:10.940298    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:10.942578    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:11.011286    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:59:11.011286    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:59:11.011286    8792 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 19:59:11.015418    8792 out.go:179] * Enabled addons: 
	I1212 19:59:11.018366    8792 addons.go:530] duration metric: took 1m36.8652549s for enable addons: enabled=[]
	I1212 19:59:11.943695    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:11.943695    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:11.946524    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:12.947004    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:12.947004    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:12.950107    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:13.950403    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:13.950403    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:13.953492    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:14.953762    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:14.953762    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:14.957001    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:59:14.957153    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:59:14.957292    8792 type.go:168] "Request Body" body=""
	I1212 19:59:14.957344    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:14.959399    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:15.959732    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:15.959732    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:15.963481    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:16.964631    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:16.964631    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:16.967431    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:17.968335    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:17.968716    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:17.971422    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:18.975421    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:18.975482    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:18.981353    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1212 19:59:19.982483    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:19.982483    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:19.986458    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:20.986878    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:20.986878    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:20.990580    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:21.991705    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:21.991705    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:21.994313    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:22.994828    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:22.994828    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:22.998384    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:23.999291    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:23.999572    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:24.001757    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:25.002197    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:25.002197    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:25.006076    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:59:25.006076    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:59:25.006076    8792 type.go:168] "Request Body" body=""
	I1212 19:59:25.006076    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:25.008833    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:26.009236    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:26.009483    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:26.013280    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:27.013991    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:27.013991    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:27.017339    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:28.017861    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:28.017861    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:28.020302    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:29.021278    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:29.021278    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:29.024910    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:30.025134    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:30.025134    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:30.028490    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:31.029228    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:31.029228    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:31.032192    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:32.033358    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:32.033358    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:32.037022    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:33.037052    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:33.037052    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:33.039997    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:34.040974    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:34.040974    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:34.044336    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:35.045158    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:35.045158    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:35.050424    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	W1212 19:59:35.050478    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:59:35.050634    8792 type.go:168] "Request Body" body=""
	I1212 19:59:35.050710    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:35.053272    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:36.053659    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:36.053659    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:36.056921    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:37.057862    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:37.057983    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:37.061055    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:38.061705    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:38.061705    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:38.064401    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:39.065070    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:39.065070    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:39.070212    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1212 19:59:40.070745    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:40.070745    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:40.074056    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:41.074238    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:41.074238    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:41.077817    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:42.078786    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:42.078786    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:42.082102    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:43.082439    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:43.082849    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:43.086074    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:44.086257    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:44.086257    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:44.089158    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:45.089746    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:45.089746    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:45.093004    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:59:45.093004    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:59:45.093004    8792 type.go:168] "Request Body" body=""
	I1212 19:59:45.093004    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:45.096683    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:46.097116    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:46.097615    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:46.100214    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:47.101361    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:47.101361    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:47.104657    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:48.104994    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:48.104994    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:48.108049    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:49.109535    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:49.109535    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:49.112664    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:50.113614    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:50.113614    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:50.117411    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:51.117709    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:51.117709    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:51.121291    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:52.121914    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:52.122224    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:52.125068    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:53.125697    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:53.126105    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:53.129084    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:54.129467    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:54.129467    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:54.133149    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:55.133722    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:55.133722    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:55.139098    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	W1212 19:59:55.139630    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:59:55.139774    8792 type.go:168] "Request Body" body=""
	I1212 19:59:55.139830    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:55.142212    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:56.142471    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:56.142471    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:56.145561    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:57.146754    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:57.146754    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:57.150691    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:58.151315    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:58.151315    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:58.153802    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:59.154632    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:59.154632    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:59.157895    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:00.158286    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:00.158286    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:00.161521    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:01.161851    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:01.161851    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:01.165478    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:02.166140    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:02.166140    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:02.169015    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:03.169549    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:03.169549    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:03.179028    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=9
	I1212 20:00:04.179254    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:04.179632    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:04.182303    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:05.183057    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:05.183057    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:05.186169    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:00:05.186202    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:00:05.186368    8792 type.go:168] "Request Body" body=""
	I1212 20:00:05.186427    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:05.188490    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:06.189369    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:06.189369    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:06.191767    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:07.192287    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:07.192287    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:07.195873    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:08.196564    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:08.196564    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:08.200301    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:09.200652    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:09.201050    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:09.203873    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:10.204621    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:10.204621    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:10.207991    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:11.208169    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:11.208695    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:11.211546    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:12.212265    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:12.212265    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:12.215652    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:13.216481    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:13.216481    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:13.218808    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:14.219114    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:14.219114    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:14.222371    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:15.223587    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:15.223882    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:15.226696    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:00:15.226696    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:00:15.226696    8792 type.go:168] "Request Body" body=""
	I1212 20:00:15.227288    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:15.230014    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:16.230255    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:16.230702    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:16.234073    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:17.234537    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:17.234537    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:17.238981    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:00:18.240162    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:18.240450    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:18.242671    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:19.244029    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:19.244029    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:19.247551    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:20.248288    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:20.248689    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:20.251486    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:21.252448    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:21.252448    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:21.255871    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:22.256129    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:22.256129    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:22.259292    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:23.259853    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:23.260152    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:23.263166    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:24.264181    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:24.264523    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:24.267309    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:25.267655    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:25.267655    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:25.270583    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:00:25.270681    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:00:25.270716    8792 type.go:168] "Request Body" body=""
	I1212 20:00:25.270716    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:25.272780    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:26.273236    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:26.273236    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:26.276531    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:27.277612    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:27.277612    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:27.280399    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:28.280976    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:28.281348    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:28.284050    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:29.284889    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:29.284889    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:29.288318    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:30.289605    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:30.289605    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:30.292210    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:31.292623    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:31.292623    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:31.296173    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:32.297272    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:32.297272    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:32.300365    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:33.300747    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:33.300747    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:33.304627    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:34.305148    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:34.305148    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:34.307286    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:35.308221    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:35.308221    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:35.311525    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:00:35.311525    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:00:35.311525    8792 type.go:168] "Request Body" body=""
	I1212 20:00:35.311525    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:35.314768    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:36.315303    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:36.315803    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:36.319885    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:00:37.320651    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:37.320651    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:37.323804    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:38.324633    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:38.324633    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:38.327596    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:39.328167    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:39.328827    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:39.332387    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:40.335388    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:40.335388    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:40.341222    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1212 20:00:41.342293    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:41.342293    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:41.346503    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:00:42.346733    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:42.347391    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:42.349901    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:43.350351    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:43.350351    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:43.353790    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:44.354356    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:44.354951    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:44.357421    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:45.357936    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:45.358254    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:45.361424    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:00:45.361488    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:00:45.361558    8792 type.go:168] "Request Body" body=""
	I1212 20:00:45.361734    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:45.364678    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:46.364915    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:46.364915    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:46.368243    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:47.368380    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:47.368380    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:47.371842    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:48.372123    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:48.372496    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:48.375782    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:49.376328    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:49.376328    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:49.379339    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:50.379689    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:50.380090    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:50.383968    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:51.384253    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:51.384253    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:51.387625    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:52.388421    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:52.388421    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:52.391331    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:53.392103    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:53.392524    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:53.395936    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:54.396522    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:54.396914    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:54.399312    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:55.399853    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:55.399853    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:55.404011    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1212 20:00:55.404054    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:00:55.404190    8792 type.go:168] "Request Body" body=""
	I1212 20:00:55.404190    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:55.406466    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:56.406717    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:56.406717    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:56.409652    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:57.409829    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:57.409829    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:57.413808    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:58.414272    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:58.414272    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:58.416891    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:59.418094    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:59.418094    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:59.422379    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:01:00.422928    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:00.423211    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:00.425511    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:01.426949    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:01.427372    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:01.429940    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:02.430697    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:02.430894    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:02.434142    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:03.434554    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:03.434554    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:03.438125    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:04.438646    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:04.438646    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:04.441873    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:05.442580    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:05.443007    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:05.445227    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:01:05.445288    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:01:05.445349    8792 type.go:168] "Request Body" body=""
	I1212 20:01:05.445349    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:05.447160    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1212 20:01:06.448042    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:06.448299    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:06.451364    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:07.451519    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:07.451519    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:07.454072    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:08.455225    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:08.455581    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:08.458949    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:09.459239    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:09.459483    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:09.462124    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:10.462488    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:10.462488    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:10.465073    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:11.466146    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:11.466334    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:11.468858    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:12.469556    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:12.469556    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:12.472263    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:13.473070    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:13.473070    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:13.476554    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:14.476996    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:14.477386    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:14.479751    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:15.480652    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:15.480652    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:15.484243    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:01:15.484268    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:01:15.484379    8792 type.go:168] "Request Body" body=""
	I1212 20:01:15.484379    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:15.486997    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:16.487837    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:16.487837    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:16.491073    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:17.491865    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:17.492218    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:17.495307    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:18.495909    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:18.495909    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:18.499046    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:19.499542    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:19.499542    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:19.502844    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:20.503664    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:20.503664    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:20.506838    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:21.507123    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:21.507496    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:21.510126    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:22.510522    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:22.510522    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:22.513442    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:23.514259    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:23.514259    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:23.516261    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:24.517279    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:24.517279    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:24.520541    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:25.521455    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:25.521455    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:25.524551    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:01:25.524625    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:01:25.524657    8792 type.go:168] "Request Body" body=""
	I1212 20:01:25.524657    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:25.527752    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:26.528360    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:26.528723    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:26.532917    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:01:27.533242    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:27.533242    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:27.537366    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:01:28.538106    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:28.538495    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:28.543549    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1212 20:01:29.544680    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:29.544680    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:29.548232    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:30.548450    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:30.548850    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:30.552101    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:31.552352    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:31.552352    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:31.556248    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:32.556689    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:32.556689    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:32.560889    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:01:33.561227    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:33.561227    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:33.565100    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:34.566919    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:34.566919    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:34.573248    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=6
	I1212 20:01:35.574024    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:35.574411    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:35.577335    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:01:35.577335    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:01:35.577335    8792 type.go:168] "Request Body" body=""
	I1212 20:01:35.577335    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:35.579846    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:36.580067    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:36.580067    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:36.582937    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:37.583614    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:37.584133    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:37.588041    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:38.588334    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:38.588334    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:38.590836    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:39.591771    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:39.592199    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:39.596300    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:40.596570    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:40.596570    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:40.599738    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:41.600585    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:41.600964    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:41.603618    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:42.604326    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:42.604326    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:42.607888    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:43.608118    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:43.608432    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:43.611303    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:44.612148    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:44.612148    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:44.615841    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:45.616729    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:45.616729    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:45.619383    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:01:45.619383    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:01:45.619913    8792 type.go:168] "Request Body" body=""
	I1212 20:01:45.619962    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:45.624234    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:01:46.624440    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:46.624440    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:46.631606    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=7
	I1212 20:01:47.631772    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:47.631772    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:47.634254    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:48.635335    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:48.635335    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:48.638393    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:49.638538    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:49.638538    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:49.642244    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:50.643486    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:50.643486    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:50.646864    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:51.647407    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:51.648062    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:51.651297    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:52.652310    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:52.652310    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:52.656003    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:53.657050    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:53.657050    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:53.660358    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:54.661093    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:54.661093    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:54.664217    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:55.665772    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:55.665772    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:55.669789    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1212 20:01:55.669789    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:01:55.669789    8792 type.go:168] "Request Body" body=""
	I1212 20:01:55.669789    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:55.672845    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:56.673184    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:56.673578    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:56.676091    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:57.677260    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:57.677260    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:57.680492    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:58.680999    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:58.681801    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:58.684437    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:59.685343    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:59.685343    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:59.688492    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:00.689226    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:00.689226    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:00.692407    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:01.693054    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:01.693054    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:01.696414    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:02.696707    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:02.696707    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:02.700656    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:03.701360    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:03.701764    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:03.704532    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:04.705055    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:04.705395    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:04.709582    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:02:05.709819    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:05.709819    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:05.712925    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:02:05.712925    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:02:05.712925    8792 type.go:168] "Request Body" body=""
	I1212 20:02:05.712925    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:05.714981    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:06.715647    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:06.715989    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:06.718856    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:07.719549    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:07.719950    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:07.723017    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:08.723622    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:08.723991    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:08.726824    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:09.727519    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:09.727519    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:09.731398    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:10.731940    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:10.732255    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:10.735314    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:11.736266    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:11.736266    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:11.739684    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:12.740926    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:12.741346    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:12.744101    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:13.745071    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:13.745071    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:13.749298    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:02:14.749764    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:14.749764    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:14.753277    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:15.753345    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:15.753345    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:15.755998    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:02:15.756520    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:02:15.756618    8792 type.go:168] "Request Body" body=""
	I1212 20:02:15.756676    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:15.758786    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:16.759785    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:16.759785    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:16.763359    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:17.763591    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:17.763591    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:17.767014    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:18.767248    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:18.767248    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:18.770795    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:19.770962    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:19.770962    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:19.773337    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:20.774557    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:20.774557    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:20.777421    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:21.778527    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:21.778968    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:21.782312    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:22.783001    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:22.783358    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:22.785874    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:23.786668    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:23.786668    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:23.789637    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:24.790000    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:24.790000    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:24.793439    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:25.793897    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:25.793897    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:25.797842    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:02:25.797972    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:02:25.797972    8792 type.go:168] "Request Body" body=""
	I1212 20:02:25.797972    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:25.800999    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:26.801297    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:26.801297    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:26.804559    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:27.805028    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:27.805383    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:27.808770    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:28.809311    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:28.809864    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:28.812697    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:29.812980    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:29.812980    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:29.816569    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:30.816822    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:30.816822    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:30.819812    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:31.820344    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:31.820344    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:31.824040    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:32.825223    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:32.825223    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:32.828636    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:33.828922    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:33.828922    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:33.833012    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:02:34.834105    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:34.834781    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:34.837739    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:35.838239    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:35.839054    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:35.842296    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:02:35.842377    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:02:35.842447    8792 type.go:168] "Request Body" body=""
	I1212 20:02:35.842525    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:35.845253    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:36.845542    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:36.845878    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:36.849197    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:37.849575    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:37.849575    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:37.852774    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:38.853254    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:38.853925    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:38.857020    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:39.857636    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:39.857636    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:39.861466    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:40.861880    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:40.862546    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:40.865734    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:41.866931    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:41.866931    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:41.870407    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:42.871284    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:42.871284    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:42.875909    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:02:43.876145    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:43.876145    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:43.879252    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:44.879595    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:44.879595    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:44.882581    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:45.882793    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:45.882793    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:45.886772    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:02:45.886823    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:02:45.886823    8792 type.go:168] "Request Body" body=""
	I1212 20:02:45.886823    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:45.889488    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:46.889817    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:46.889817    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:46.892533    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:47.893171    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:47.893605    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:47.897327    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:48.898243    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:48.898243    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:48.901190    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:49.901751    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:49.902239    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:49.905447    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:50.905509    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:50.905509    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:50.908968    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:51.909246    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:51.909595    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:51.913571    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:52.914178    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:52.914178    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:52.917630    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:53.918264    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:53.918264    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:53.921578    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:54.921843    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:54.921843    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:54.925388    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:55.925667    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:55.925667    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:55.929367    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:02:55.929367    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:02:55.929367    8792 type.go:168] "Request Body" body=""
	I1212 20:02:55.929367    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:55.932191    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:56.932533    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:56.932533    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:56.936530    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:57.937538    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:57.937902    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:57.940876    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:58.941300    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:58.941300    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:58.944722    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:59.945325    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:59.945325    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:59.948320    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:00.948833    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:00.948833    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:00.952416    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:01.953225    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:01.953225    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:01.956654    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:02.956910    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:02.956910    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:02.959952    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:03.960484    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:03.961032    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:03.963951    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:04.965244    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:04.965633    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:04.968258    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:05.968774    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:05.968774    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:05.971651    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:03:05.971651    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:03:05.971651    8792 type.go:168] "Request Body" body=""
	I1212 20:03:05.971651    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:05.974027    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:06.974449    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:06.974741    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:06.977205    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:07.977634    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:07.977798    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:07.981006    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:08.982134    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:08.982134    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:08.985063    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:09.985961    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:09.985961    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:09.988609    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:10.988755    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:10.988755    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:10.991472    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:11.992370    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:11.992370    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:11.996488    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:03:12.996868    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:12.997258    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:13.000762    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:14.001059    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:14.001059    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:14.004368    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:15.004777    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:15.004777    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:15.007757    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:16.008339    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:16.008625    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:16.011236    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:03:16.011236    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:03:16.011236    8792 type.go:168] "Request Body" body=""
	I1212 20:03:16.011236    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:16.013832    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:17.014609    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:17.014609    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:17.018477    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:18.018689    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:18.018689    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:18.022881    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:03:19.023377    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:19.023377    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:19.027571    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:03:20.028073    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:20.028073    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:20.031057    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:21.031744    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:21.032211    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:21.035492    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:22.036462    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:22.036462    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:22.038986    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:23.039813    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:23.040216    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:23.042835    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:24.043623    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:24.043623    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:24.047746    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:03:25.048465    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:25.048465    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:25.051125    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:26.051732    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:26.051732    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:26.055363    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:03:26.055363    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:03:26.055363    8792 type.go:168] "Request Body" body=""
	I1212 20:03:26.055363    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:26.058940    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:27.059108    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:27.059476    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:27.062503    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:28.062870    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:28.062870    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:28.066764    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:29.067215    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:29.067215    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:29.069923    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:30.070845    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:30.070845    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:30.073412    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:31.074536    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:31.074979    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:31.077758    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:32.078060    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:32.078060    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:32.082117    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:03:33.083505    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:33.083505    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:33.086255    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:34.087642    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:34.087642    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:34.090378    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:03:34.543368    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1212 20:03:34.543799    8792 node_ready.go:38] duration metric: took 6m0.000497s for node "functional-468800" to be "Ready" ...
	I1212 20:03:34.547199    8792 out.go:203] 
	W1212 20:03:34.550016    8792 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 20:03:34.550016    8792 out.go:285] * 
	W1212 20:03:34.552052    8792 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:03:34.555048    8792 out.go:203] 

** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-windows-amd64.exe start -p functional-468800 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m10.4538802s for "functional-468800" cluster.
I1212 20:03:35.339323   13396 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-468800
helpers_test.go:244: (dbg) docker inspect functional-468800:

-- stdout --
	[
	    {
	        "Id": "0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356",
	        "Created": "2025-12-12T19:49:05.216637357Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42493,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T19:49:05.485304524Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/hosts",
	        "LogPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356-json.log",
	        "Name": "/functional-468800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-468800:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-468800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06-init/diff:/var/lib/docker/overlay2/98a48e148b465d6f3cfef773b7defe35ccd4e6e0eb3c49d8b329f27b5b52fa09/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-468800",
	                "Source": "/var/lib/docker/volumes/functional-468800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-468800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-468800",
	                "name.minikube.sigs.k8s.io": "functional-468800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1c576a1bc459e3ecef05356a56ff79fb60b3540cf362d7af9e4f9ccd4a4dded4",
	            "SandboxKey": "/var/run/docker/netns/1c576a1bc459",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55779"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55780"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55781"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55777"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55778"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-468800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a0db2e63885ea6d1e4cf32e8285a0f1e1fe10bae67ef9ab957c6270bdb8c136b",
	                    "EndpointID": "d5e453160824066e5fee477ceb2ca5411fd18d149268eed593084ba8a0cf13dd",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-468800",
	                        "0d3494c9d93e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-468800 -n functional-468800
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-468800 -n functional-468800: exit status 2 (604.812ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-468800 logs -n 25: (1.193088s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                          ARGS                                                           │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-461000 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr     │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:42 UTC │ 12 Dec 25 19:42 UTC │
	│ image          │ functional-461000 image ls                                                                                              │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:42 UTC │ 12 Dec 25 19:42 UTC │
	│ image          │ functional-461000 image save --daemon kicbase/echo-server:functional-461000 --alsologtostderr                           │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:42 UTC │ 12 Dec 25 19:42 UTC │
	│ start          │ -p functional-461000 --dry-run --memory 250MB --alsologtostderr --driver=docker                                         │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │                     │
	│ start          │ -p functional-461000 --dry-run --memory 250MB --alsologtostderr --driver=docker                                         │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │                     │
	│ start          │ -p functional-461000 --dry-run --alsologtostderr -v=1 --driver=docker                                                   │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │                     │
	│ service        │ functional-461000 service list                                                                                          │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ dashboard      │ --url --port 36195 -p functional-461000 --alsologtostderr -v=1                                                          │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │                     │
	│ service        │ functional-461000 service list -o json                                                                                  │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ update-context │ functional-461000 update-context --alsologtostderr -v=2                                                                 │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ update-context │ functional-461000 update-context --alsologtostderr -v=2                                                                 │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ service        │ functional-461000 service --namespace=default --https --url hello-node                                                  │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │                     │
	│ update-context │ functional-461000 update-context --alsologtostderr -v=2                                                                 │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ image          │ functional-461000 image ls --format yaml --alsologtostderr                                                              │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ ssh            │ functional-461000 ssh pgrep buildkitd                                                                                   │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │                     │
	│ image          │ functional-461000 image build -t localhost/my-image:functional-461000 testdata\build --alsologtostderr                  │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ image          │ functional-461000 image ls                                                                                              │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ image          │ functional-461000 image ls --format json --alsologtostderr                                                              │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ image          │ functional-461000 image ls --format table --alsologtostderr                                                             │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ image          │ functional-461000 image ls --format short --alsologtostderr                                                             │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ service        │ functional-461000 service hello-node --url --format={{.IP}}                                                             │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │                     │
	│ service        │ functional-461000 service hello-node --url                                                                              │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:44 UTC │                     │
	│ delete         │ -p functional-461000                                                                                                    │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:48 UTC │ 12 Dec 25 19:48 UTC │
	│ start          │ -p functional-468800 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:48 UTC │                     │
	│ start          │ -p functional-468800 --alsologtostderr -v=8                                                                             │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:57 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 19:57:24
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 19:57:24.956785    8792 out.go:360] Setting OutFile to fd 1808 ...
	I1212 19:57:24.998786    8792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:57:24.998786    8792 out.go:374] Setting ErrFile to fd 1700...
	I1212 19:57:24.998786    8792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:57:25.011786    8792 out.go:368] Setting JSON to false
	I1212 19:57:25.013782    8792 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3583,"bootTime":1765565861,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 19:57:25.013782    8792 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 19:57:25.016780    8792 out.go:179] * [functional-468800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 19:57:25.020780    8792 notify.go:221] Checking for updates...
	I1212 19:57:25.022780    8792 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 19:57:25.024782    8792 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 19:57:25.027780    8792 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 19:57:25.030779    8792 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 19:57:25.034782    8792 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:57:25.037790    8792 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 19:57:25.037790    8792 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:57:25.155476    8792 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 19:57:25.159985    8792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:57:25.387868    8792 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 19:57:25.372369133 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 19:57:25.391884    8792 out.go:179] * Using the docker driver based on existing profile
	I1212 19:57:25.396868    8792 start.go:309] selected driver: docker
	I1212 19:57:25.396868    8792 start.go:927] validating driver "docker" against &{Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:57:25.396868    8792 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 19:57:25.402871    8792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:57:25.622678    8792 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 19:57:25.606400505 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 19:57:25.701623    8792 cni.go:84] Creating CNI manager for ""
	I1212 19:57:25.701623    8792 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 19:57:25.701623    8792 start.go:353] cluster config:
	{Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stat
icIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:57:25.706631    8792 out.go:179] * Starting "functional-468800" primary control-plane node in "functional-468800" cluster
	I1212 19:57:25.708636    8792 cache.go:134] Beginning downloading kic base image for docker with docker
	I1212 19:57:25.711883    8792 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 19:57:25.714043    8792 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 19:57:25.714043    8792 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 19:57:25.714043    8792 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1212 19:57:25.714043    8792 cache.go:65] Caching tarball of preloaded images
	I1212 19:57:25.714043    8792 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 19:57:25.714043    8792 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1212 19:57:25.714043    8792 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\config.json ...
	I1212 19:57:25.792275    8792 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 19:57:25.792275    8792 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 19:57:25.792275    8792 cache.go:243] Successfully downloaded all kic artifacts
	I1212 19:57:25.792275    8792 start.go:360] acquireMachinesLock for functional-468800: {Name:mk7e7177bdfcb5a7969561474f8bb14fa15c1eab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 19:57:25.792275    8792 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-468800"
	I1212 19:57:25.792275    8792 start.go:96] Skipping create...Using existing machine configuration
	I1212 19:57:25.792275    8792 fix.go:54] fixHost starting: 
	I1212 19:57:25.799955    8792 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
	I1212 19:57:25.853025    8792 fix.go:112] recreateIfNeeded on functional-468800: state=Running err=<nil>
	W1212 19:57:25.853025    8792 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 19:57:25.856025    8792 out.go:252] * Updating the running docker "functional-468800" container ...
	I1212 19:57:25.856025    8792 machine.go:94] provisionDockerMachine start ...
	I1212 19:57:25.859025    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:25.918375    8792 main.go:143] libmachine: Using SSH client type: native
	I1212 19:57:25.918479    8792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:57:25.918479    8792 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 19:57:26.103358    8792 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-468800
	
	I1212 19:57:26.103411    8792 ubuntu.go:182] provisioning hostname "functional-468800"
	I1212 19:57:26.107534    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:26.162431    8792 main.go:143] libmachine: Using SSH client type: native
	I1212 19:57:26.162900    8792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:57:26.163030    8792 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-468800 && echo "functional-468800" | sudo tee /etc/hostname
	I1212 19:57:26.366993    8792 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-468800
	
	I1212 19:57:26.370927    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:26.421027    8792 main.go:143] libmachine: Using SSH client type: native
	I1212 19:57:26.422025    8792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:57:26.422025    8792 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-468800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-468800/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-468800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 19:57:26.592472    8792 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 19:57:26.592472    8792 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1212 19:57:26.592472    8792 ubuntu.go:190] setting up certificates
	I1212 19:57:26.592472    8792 provision.go:84] configureAuth start
	I1212 19:57:26.596494    8792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-468800
	I1212 19:57:26.648327    8792 provision.go:143] copyHostCerts
	I1212 19:57:26.648492    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1212 19:57:26.648569    8792 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1212 19:57:26.648569    8792 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1212 19:57:26.648569    8792 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 19:57:26.649807    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1212 19:57:26.649946    8792 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1212 19:57:26.649946    8792 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1212 19:57:26.649946    8792 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 19:57:26.650879    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1212 19:57:26.650879    8792 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1212 19:57:26.650879    8792 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1212 19:57:26.650879    8792 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1212 19:57:26.651440    8792 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-468800 san=[127.0.0.1 192.168.49.2 functional-468800 localhost minikube]
	I1212 19:57:26.782013    8792 provision.go:177] copyRemoteCerts
	I1212 19:57:26.785479    8792 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 19:57:26.788240    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:26.842524    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:26.968619    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1212 19:57:26.968964    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 19:57:26.995759    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1212 19:57:26.995759    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 19:57:27.024847    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1212 19:57:27.024847    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 19:57:27.057221    8792 provision.go:87] duration metric: took 464.7444ms to configureAuth
	I1212 19:57:27.057221    8792 ubuntu.go:206] setting minikube options for container-runtime
	I1212 19:57:27.057221    8792 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 19:57:27.061251    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:27.121889    8792 main.go:143] libmachine: Using SSH client type: native
	I1212 19:57:27.122548    8792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:57:27.122604    8792 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 19:57:27.313910    8792 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1212 19:57:27.313910    8792 ubuntu.go:71] root file system type: overlay
	I1212 19:57:27.313910    8792 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 19:57:27.317488    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:27.376486    8792 main.go:143] libmachine: Using SSH client type: native
	I1212 19:57:27.377052    8792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:57:27.377052    8792 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 19:57:27.577536    8792 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 19:57:27.581688    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:27.635455    8792 main.go:143] libmachine: Using SSH client type: native
	I1212 19:57:27.635931    8792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:57:27.635954    8792 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 19:57:27.828516    8792 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 19:57:27.828574    8792 machine.go:97] duration metric: took 1.9725293s to provisionDockerMachine
	I1212 19:57:27.828619    8792 start.go:293] postStartSetup for "functional-468800" (driver="docker")
	I1212 19:57:27.828619    8792 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 19:57:27.833127    8792 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 19:57:27.836440    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:27.891552    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:28.022421    8792 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 19:57:28.031829    8792 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1212 19:57:28.031829    8792 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1212 19:57:28.031829    8792 command_runner.go:130] > VERSION_ID="12"
	I1212 19:57:28.031829    8792 command_runner.go:130] > VERSION="12 (bookworm)"
	I1212 19:57:28.031829    8792 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1212 19:57:28.031829    8792 command_runner.go:130] > ID=debian
	I1212 19:57:28.031829    8792 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1212 19:57:28.031829    8792 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1212 19:57:28.031829    8792 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1212 19:57:28.031829    8792 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 19:57:28.031829    8792 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 19:57:28.031829    8792 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1212 19:57:28.032546    8792 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1212 19:57:28.033148    8792 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> 133962.pem in /etc/ssl/certs
	I1212 19:57:28.033204    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> /etc/ssl/certs/133962.pem
	I1212 19:57:28.033277    8792 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\13396\hosts -> hosts in /etc/test/nested/copy/13396
	I1212 19:57:28.033277    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\13396\hosts -> /etc/test/nested/copy/13396/hosts
	I1212 19:57:28.037935    8792 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/13396
	I1212 19:57:28.050821    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /etc/ssl/certs/133962.pem (1708 bytes)
	I1212 19:57:28.081156    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\13396\hosts --> /etc/test/nested/copy/13396/hosts (40 bytes)
	I1212 19:57:28.109846    8792 start.go:296] duration metric: took 281.2243ms for postStartSetup
	I1212 19:57:28.115818    8792 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 19:57:28.118674    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:28.171853    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:28.302700    8792 command_runner.go:130] > 1%
	I1212 19:57:28.308193    8792 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 19:57:28.316146    8792 command_runner.go:130] > 950G
	I1212 19:57:28.316204    8792 fix.go:56] duration metric: took 2.5239035s for fixHost
	I1212 19:57:28.316204    8792 start.go:83] releasing machines lock for "functional-468800", held for 2.5239035s
	I1212 19:57:28.320187    8792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-468800
	I1212 19:57:28.373764    8792 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1212 19:57:28.378728    8792 ssh_runner.go:195] Run: cat /version.json
	I1212 19:57:28.378728    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:28.382043    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:28.432252    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:28.433503    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:28.550849    8792 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W1212 19:57:28.550961    8792 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1212 19:57:28.550961    8792 command_runner.go:130] > {"iso_version": "v1.37.0-1765481609-22101", "kicbase_version": "v0.0.48-1765505794-22112", "minikube_version": "v1.37.0", "commit": "2e51b54b5cee5d454381ac23cfe3d8d395879671"}
	I1212 19:57:28.556187    8792 ssh_runner.go:195] Run: systemctl --version
	I1212 19:57:28.565686    8792 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1212 19:57:28.565686    8792 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1212 19:57:28.570074    8792 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 19:57:28.577782    8792 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 19:57:28.578775    8792 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 19:57:28.583114    8792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 19:57:28.595283    8792 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 19:57:28.595283    8792 start.go:496] detecting cgroup driver to use...
	I1212 19:57:28.595283    8792 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 19:57:28.595283    8792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 19:57:28.617880    8792 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 19:57:28.622700    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 19:57:28.640953    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 19:57:28.655059    8792 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 19:57:28.659503    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1212 19:57:28.659726    8792 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1212 19:57:28.659726    8792 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1212 19:57:28.678759    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 19:57:28.696413    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 19:57:28.715842    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 19:57:28.736528    8792 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 19:57:28.755951    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 19:57:28.776240    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 19:57:28.795721    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 19:57:28.815051    8792 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 19:57:28.829778    8792 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 19:57:28.834204    8792 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 19:57:28.852899    8792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:57:28.995620    8792 ssh_runner.go:195] Run: sudo systemctl restart containerd
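	[editor note] The run of `sed -i -r` calls above rewrites `/etc/containerd/config.toml` (pause image, cgroup driver, runc v2, CNI conf dir) before containerd is restarted. A minimal sketch replaying the two most consequential edits on a scratch copy; the TOML fragment is an illustrative subset, not the real config:

```shell
# Sketch: replay minikube's containerd config edits on a scratch file.
# The TOML below is an illustrative subset of /etc/containerd/config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF
# Pin the pause image (mirrors the first sed in the log).
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' "$cfg"
# Select the cgroupfs driver by turning SystemdCgroup off (mirrors the later sed).
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
out=$(grep -E 'sandbox_image|SystemdCgroup' "$cfg")
echo "$out"
rm -f "$cfg"
```

	On the real node these edits are followed by `systemctl daemon-reload` and `systemctl restart containerd`, as the next two log lines show.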
	I1212 19:57:29.167559    8792 start.go:496] detecting cgroup driver to use...
	I1212 19:57:29.167559    8792 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 19:57:29.172911    8792 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 19:57:29.191693    8792 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1212 19:57:29.191693    8792 command_runner.go:130] > [Unit]
	I1212 19:57:29.191693    8792 command_runner.go:130] > Description=Docker Application Container Engine
	I1212 19:57:29.191693    8792 command_runner.go:130] > Documentation=https://docs.docker.com
	I1212 19:57:29.191693    8792 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1212 19:57:29.191693    8792 command_runner.go:130] > Wants=network-online.target containerd.service
	I1212 19:57:29.191693    8792 command_runner.go:130] > Requires=docker.socket
	I1212 19:57:29.191693    8792 command_runner.go:130] > StartLimitBurst=3
	I1212 19:57:29.191693    8792 command_runner.go:130] > StartLimitIntervalSec=60
	I1212 19:57:29.191693    8792 command_runner.go:130] > [Service]
	I1212 19:57:29.191693    8792 command_runner.go:130] > Type=notify
	I1212 19:57:29.191693    8792 command_runner.go:130] > Restart=always
	I1212 19:57:29.191693    8792 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1212 19:57:29.191693    8792 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1212 19:57:29.191693    8792 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1212 19:57:29.191693    8792 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1212 19:57:29.191693    8792 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1212 19:57:29.191693    8792 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1212 19:57:29.191693    8792 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1212 19:57:29.191693    8792 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1212 19:57:29.191693    8792 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1212 19:57:29.191693    8792 command_runner.go:130] > ExecStart=
	I1212 19:57:29.191693    8792 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1212 19:57:29.191693    8792 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1212 19:57:29.191693    8792 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1212 19:57:29.191693    8792 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1212 19:57:29.191693    8792 command_runner.go:130] > LimitNOFILE=infinity
	I1212 19:57:29.191693    8792 command_runner.go:130] > LimitNPROC=infinity
	I1212 19:57:29.191693    8792 command_runner.go:130] > LimitCORE=infinity
	I1212 19:57:29.191693    8792 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1212 19:57:29.191693    8792 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1212 19:57:29.191693    8792 command_runner.go:130] > TasksMax=infinity
	I1212 19:57:29.191693    8792 command_runner.go:130] > TimeoutStartSec=0
	I1212 19:57:29.191693    8792 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1212 19:57:29.191693    8792 command_runner.go:130] > Delegate=yes
	I1212 19:57:29.191693    8792 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1212 19:57:29.191693    8792 command_runner.go:130] > KillMode=process
	I1212 19:57:29.191693    8792 command_runner.go:130] > OOMScoreAdjust=-500
	I1212 19:57:29.191693    8792 command_runner.go:130] > [Install]
	I1212 19:57:29.191693    8792 command_runner.go:130] > WantedBy=multi-user.target
	I1212 19:57:29.196788    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 19:57:29.221924    8792 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 19:57:29.312337    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 19:57:29.337554    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 19:57:29.357559    8792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 19:57:29.379522    8792 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
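	[editor note] Both `crictl.yaml` writes in this log (first for containerd, here for cri-dockerd) use the same `printf ... | sudo tee /etc/crictl.yaml` pattern. A sketch of the same write using a temp file in place of `/etc/crictl.yaml`, so no root is needed:

```shell
# Sketch: point crictl at the cri-dockerd socket, as the printf|tee above does.
# A temp file stands in for /etc/crictl.yaml.
crictl_yaml=$(mktemp)
printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' | tee "$crictl_yaml"
```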
	I1212 19:57:29.384213    8792 ssh_runner.go:195] Run: which cri-dockerd
	I1212 19:57:29.390808    8792 command_runner.go:130] > /usr/bin/cri-dockerd
	I1212 19:57:29.396438    8792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 19:57:29.409074    8792 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1212 19:57:29.434191    8792 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 19:57:29.578871    8792 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 19:57:29.719341    8792 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 19:57:29.719341    8792 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
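	[editor note] The 130-byte `/etc/docker/daemon.json` pushed here is not printed in the log. A plausible fragment for the "cgroupfs as cgroup driver" step it describes; `exec-opts`/`native.cgroupdriver` is Docker's documented knob, but the exact file contents minikube writes are an assumption:

```shell
# Sketch: a daemon.json fragment selecting the cgroupfs cgroup driver.
# Assumption: the real 130-byte file minikube ships contains more keys.
dj=$(mktemp)
cat > "$dj" <<'EOF'
{"exec-opts": ["native.cgroupdriver=cgroupfs"]}
EOF
cat "$dj"
```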
	I1212 19:57:29.746173    8792 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1212 19:57:29.768870    8792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:57:29.905737    8792 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 19:57:30.757640    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 19:57:30.780953    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1212 19:57:30.802218    8792 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1212 19:57:30.829184    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 19:57:30.853409    8792 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 19:57:30.994012    8792 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 19:57:31.134627    8792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:57:31.283484    8792 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 19:57:31.309618    8792 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1212 19:57:31.333897    8792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:57:31.475108    8792 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1212 19:57:31.578219    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 19:57:31.597007    8792 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 19:57:31.600988    8792 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 19:57:31.610316    8792 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1212 19:57:31.611281    8792 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 19:57:31.611281    8792 command_runner.go:130] > Device: 0,112	Inode: 1755        Links: 1
	I1212 19:57:31.611281    8792 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1212 19:57:31.611281    8792 command_runner.go:130] > Access: 2025-12-12 19:57:31.474638770 +0000
	I1212 19:57:31.611281    8792 command_runner.go:130] > Modify: 2025-12-12 19:57:31.474638770 +0000
	I1212 19:57:31.611281    8792 command_runner.go:130] > Change: 2025-12-12 19:57:31.484639595 +0000
	I1212 19:57:31.611281    8792 command_runner.go:130] >  Birth: -
	I1212 19:57:31.611281    8792 start.go:564] Will wait 60s for crictl version
	I1212 19:57:31.615844    8792 ssh_runner.go:195] Run: which crictl
	I1212 19:57:31.621876    8792 command_runner.go:130] > /usr/local/bin/crictl
	I1212 19:57:31.626999    8792 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 19:57:31.672687    8792 command_runner.go:130] > Version:  0.1.0
	I1212 19:57:31.672762    8792 command_runner.go:130] > RuntimeName:  docker
	I1212 19:57:31.672762    8792 command_runner.go:130] > RuntimeVersion:  29.1.2
	I1212 19:57:31.672762    8792 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 19:57:31.672790    8792 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1212 19:57:31.676132    8792 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 19:57:31.713311    8792 command_runner.go:130] > 29.1.2
	I1212 19:57:31.716489    8792 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 19:57:31.755737    8792 command_runner.go:130] > 29.1.2
	I1212 19:57:31.761482    8792 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1212 19:57:31.765357    8792 cli_runner.go:164] Run: docker exec -t functional-468800 dig +short host.docker.internal
	I1212 19:57:31.901903    8792 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1212 19:57:31.906530    8792 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1212 19:57:31.913687    8792 command_runner.go:130] > 192.168.65.254	host.minikube.internal
	I1212 19:57:31.917320    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:31.973317    8792 kubeadm.go:884] updating cluster {Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 19:57:31.973590    8792 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 19:57:31.977450    8792 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1212 19:57:32.013673    8792 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 19:57:32.013673    8792 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 19:57:32.013673    8792 docker.go:621] Images already preloaded, skipping extraction
	I1212 19:57:32.017349    8792 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1212 19:57:32.047537    8792 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 19:57:32.047537    8792 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 19:57:32.047537    8792 cache_images.go:86] Images are preloaded, skipping loading
	I1212 19:57:32.047537    8792 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1212 19:57:32.048190    8792 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-468800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 19:57:32.051146    8792 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 19:57:32.121447    8792 command_runner.go:130] > cgroupfs
	I1212 19:57:32.121447    8792 cni.go:84] Creating CNI manager for ""
	I1212 19:57:32.121447    8792 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 19:57:32.121447    8792 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 19:57:32.121964    8792 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-468800 NodeName:functional-468800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 19:57:32.122106    8792 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-468800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
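	[editor note] The kubeadm config above is one multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by `---` and later scp'd as `/var/tmp/minikube/kubeadm.yaml.new`. A quick sanity-check sketch over a trimmed stand-in for that file:

```shell
# Sketch: the generated kubeadm.yaml is a 4-document YAML stream; count the
# documents by counting their 'kind:' headers. The here-doc is a trimmed
# stand-in for the full config shown in the log.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
docs=$(grep -c '^kind:' "$tmp")
echo "documents: $docs"
```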
	I1212 19:57:32.126035    8792 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 19:57:32.138764    8792 command_runner.go:130] > kubeadm
	I1212 19:57:32.138798    8792 command_runner.go:130] > kubectl
	I1212 19:57:32.138825    8792 command_runner.go:130] > kubelet
	I1212 19:57:32.138845    8792 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 19:57:32.143533    8792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 19:57:32.155602    8792 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1212 19:57:32.179900    8792 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 19:57:32.199342    8792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1212 19:57:32.222871    8792 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 19:57:32.229151    8792 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1212 19:57:32.234589    8792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:57:32.373967    8792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 19:57:32.974236    8792 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800 for IP: 192.168.49.2
	I1212 19:57:32.974236    8792 certs.go:195] generating shared ca certs ...
	I1212 19:57:32.974236    8792 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:57:32.975214    8792 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1212 19:57:32.975214    8792 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1212 19:57:32.975214    8792 certs.go:257] generating profile certs ...
	I1212 19:57:32.976191    8792 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\client.key
	I1212 19:57:32.976561    8792 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.key.a2fee78d
	I1212 19:57:32.976892    8792 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.key
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 19:57:32.977527    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 19:57:32.977863    8792 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem (1338 bytes)
	W1212 19:57:32.977863    8792 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396_empty.pem, impossibly tiny 0 bytes
	I1212 19:57:32.977863    8792 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1212 19:57:32.978401    8792 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 19:57:32.978646    8792 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 19:57:32.978696    8792 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1212 19:57:32.978696    8792 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem (1708 bytes)
	I1212 19:57:32.979304    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem -> /usr/share/ca-certificates/13396.pem
	I1212 19:57:32.979449    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> /usr/share/ca-certificates/133962.pem
	I1212 19:57:32.979529    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:32.980729    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 19:57:33.008686    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 19:57:33.035660    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 19:57:33.063247    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 19:57:33.108547    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 19:57:33.138500    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 19:57:33.165883    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 19:57:33.195246    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 19:57:33.221022    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem --> /usr/share/ca-certificates/13396.pem (1338 bytes)
	I1212 19:57:33.248791    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /usr/share/ca-certificates/133962.pem (1708 bytes)
	I1212 19:57:33.274438    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 19:57:33.302337    8792 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 19:57:33.324312    8792 ssh_runner.go:195] Run: openssl version
	I1212 19:57:33.335263    8792 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1212 19:57:33.339948    8792 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13396.pem
	I1212 19:57:33.356389    8792 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13396.pem /etc/ssl/certs/13396.pem
	I1212 19:57:33.375441    8792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13396.pem
	I1212 19:57:33.384435    8792 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 19:57:33.384435    8792 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 19:57:33.387660    8792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13396.pem
	I1212 19:57:33.430281    8792 command_runner.go:130] > 51391683
	I1212 19:57:33.435287    8792 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 19:57:33.452481    8792 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/133962.pem
	I1212 19:57:33.471523    8792 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/133962.pem /etc/ssl/certs/133962.pem
	I1212 19:57:33.489874    8792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133962.pem
	I1212 19:57:33.498068    8792 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 19:57:33.498068    8792 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 19:57:33.502698    8792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133962.pem
	I1212 19:57:33.544550    8792 command_runner.go:130] > 3ec20f2e
	I1212 19:57:33.549548    8792 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 19:57:33.566747    8792 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:33.583990    8792 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 19:57:33.600438    8792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:33.607945    8792 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:33.607945    8792 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:33.614484    8792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:33.657826    8792 command_runner.go:130] > b5213941
	I1212 19:57:33.662138    8792 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 19:57:33.678498    8792 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 19:57:33.685111    8792 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 19:57:33.685111    8792 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1212 19:57:33.685111    8792 command_runner.go:130] > Device: 8,48	Inode: 15292       Links: 1
	I1212 19:57:33.685111    8792 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 19:57:33.685797    8792 command_runner.go:130] > Access: 2025-12-12 19:53:20.728281925 +0000
	I1212 19:57:33.685797    8792 command_runner.go:130] > Modify: 2025-12-12 19:49:18.176374111 +0000
	I1212 19:57:33.685797    8792 command_runner.go:130] > Change: 2025-12-12 19:49:18.176374111 +0000
	I1212 19:57:33.685797    8792 command_runner.go:130] >  Birth: 2025-12-12 19:49:18.176374111 +0000
	I1212 19:57:33.689949    8792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 19:57:33.733144    8792 command_runner.go:130] > Certificate will not expire
	I1212 19:57:33.737823    8792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 19:57:33.780151    8792 command_runner.go:130] > Certificate will not expire
	I1212 19:57:33.785054    8792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 19:57:33.827773    8792 command_runner.go:130] > Certificate will not expire
	I1212 19:57:33.833292    8792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 19:57:33.875401    8792 command_runner.go:130] > Certificate will not expire
	I1212 19:57:33.880293    8792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 19:57:33.922924    8792 command_runner.go:130] > Certificate will not expire
	I1212 19:57:33.927940    8792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 19:57:33.970239    8792 command_runner.go:130] > Certificate will not expire
	I1212 19:57:33.970239    8792 kubeadm.go:401] StartCluster: {Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:57:33.976672    8792 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 19:57:34.008252    8792 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 19:57:34.020977    8792 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1212 19:57:34.021018    8792 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1212 19:57:34.021018    8792 command_runner.go:130] > /var/lib/minikube/etcd:
	I1212 19:57:34.021108    8792 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 19:57:34.021108    8792 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 19:57:34.025234    8792 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 19:57:34.045139    8792 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 19:57:34.049590    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:34.107138    8792 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-468800" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 19:57:34.107889    8792 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "functional-468800" cluster setting kubeconfig missing "functional-468800" context setting]
	I1212 19:57:34.107889    8792 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:57:34.126355    8792 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 19:57:34.126843    8792 kapi.go:59] client config for functional-468800: &rest.Config{Host:"https://127.0.0.1:55778", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff6e1d19080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 19:57:34.128169    8792 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 19:57:34.128230    8792 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 19:57:34.128230    8792 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 19:57:34.128350    8792 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 19:57:34.128350    8792 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 19:57:34.128350    8792 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 19:57:34.132435    8792 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 19:57:34.149951    8792 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1212 19:57:34.150008    8792 kubeadm.go:602] duration metric: took 128.8994ms to restartPrimaryControlPlane
	I1212 19:57:34.150032    8792 kubeadm.go:403] duration metric: took 179.7913ms to StartCluster
	I1212 19:57:34.150032    8792 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:57:34.150032    8792 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 19:57:34.151180    8792 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:57:34.152111    8792 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 19:57:34.152111    8792 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 19:57:34.152386    8792 addons.go:70] Setting storage-provisioner=true in profile "functional-468800"
	I1212 19:57:34.152386    8792 addons.go:70] Setting default-storageclass=true in profile "functional-468800"
	I1212 19:57:34.152426    8792 addons.go:239] Setting addon storage-provisioner=true in "functional-468800"
	I1212 19:57:34.152475    8792 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-468800"
	I1212 19:57:34.152564    8792 host.go:66] Checking if "functional-468800" exists ...
	I1212 19:57:34.152599    8792 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 19:57:34.155555    8792 out.go:179] * Verifying Kubernetes components...
	I1212 19:57:34.161161    8792 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
	I1212 19:57:34.161613    8792 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
	I1212 19:57:34.163072    8792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:57:34.221534    8792 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 19:57:34.221534    8792 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 19:57:34.221534    8792 kapi.go:59] client config for functional-468800: &rest.Config{Host:"https://127.0.0.1:55778", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff6e1d19080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 19:57:34.222943    8792 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 19:57:34.223481    8792 addons.go:239] Setting addon default-storageclass=true in "functional-468800"
	I1212 19:57:34.223558    8792 host.go:66] Checking if "functional-468800" exists ...
	I1212 19:57:34.223558    8792 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:34.223558    8792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 19:57:34.227691    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:34.230256    8792 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
	I1212 19:57:34.287093    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:34.289848    8792 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:34.289848    8792 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 19:57:34.293811    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:34.345554    8792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 19:57:34.348560    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:34.426758    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:34.480013    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:34.480104    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:34.534162    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:34.538400    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.538479    8792 retry.go:31] will retry after 344.600735ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.538532    8792 node_ready.go:35] waiting up to 6m0s for node "functional-468800" to be "Ready" ...
	I1212 19:57:34.539394    8792 type.go:168] "Request Body" body=""
	I1212 19:57:34.539597    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:34.541949    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:34.608531    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:34.613599    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.613599    8792 retry.go:31] will retry after 216.683996ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.835959    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:34.887701    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:34.908576    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:34.913475    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.913475    8792 retry.go:31] will retry after 230.473341ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.961197    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:34.966061    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.966061    8792 retry.go:31] will retry after 349.771822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.150121    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:35.221040    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:35.228247    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.228333    8792 retry.go:31] will retry after 512.778483ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.321063    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:35.394131    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:35.397148    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.397148    8792 retry.go:31] will retry after 487.352123ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.542707    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:35.542707    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:35.545160    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:35.747496    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:35.819613    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:35.822659    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.822659    8792 retry.go:31] will retry after 1.154413243s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.890743    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:35.965246    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:35.972460    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.972460    8792 retry.go:31] will retry after 1.245938436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:36.545730    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:36.545730    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:36.549771    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:57:36.983387    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:37.090901    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:37.094847    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:37.094847    8792 retry.go:31] will retry after 1.548342934s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:37.223991    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:37.295689    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:37.299705    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:37.299769    8792 retry.go:31] will retry after 1.579528606s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:37.551013    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:37.551013    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:37.554154    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:38.554939    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:38.555432    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:38.558234    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:38.649390    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:38.725500    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:38.729499    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:38.729499    8792 retry.go:31] will retry after 2.648471583s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:38.884600    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:38.953302    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:38.958318    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:38.958318    8792 retry.go:31] will retry after 2.058418403s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:39.559077    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:39.559356    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:39.562225    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:40.562954    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:40.563393    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:40.566347    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:41.022091    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:41.102318    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:41.106247    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:41.106247    8792 retry.go:31] will retry after 3.080320353s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:41.384408    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:41.470520    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:41.473795    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:41.473795    8792 retry.go:31] will retry after 2.343057986s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:41.566604    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:41.566604    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:41.569639    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:42.569950    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:42.569950    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:42.573153    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:43.573545    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:43.573545    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:43.577655    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:57:43.821674    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:43.897847    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:43.901846    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:43.901846    8792 retry.go:31] will retry after 5.566518346s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:44.193277    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:44.263403    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:44.269459    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:44.269459    8792 retry.go:31] will retry after 4.550082482s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:44.577835    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:44.577835    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:44.580876    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:57:44.581034    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:57:44.581158    8792 type.go:168] "Request Body" body=""
	I1212 19:57:44.581244    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:44.583508    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:45.583961    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:45.583961    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:45.587161    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:46.587855    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:46.588199    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:46.590728    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:47.591504    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:47.591504    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:47.594168    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:48.595392    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:48.595392    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:48.601208    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1212 19:57:48.824534    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:48.903714    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:48.909283    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:48.909283    8792 retry.go:31] will retry after 5.408295828s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:49.475338    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:49.554836    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:49.559515    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:49.559515    8792 retry.go:31] will retry after 7.920709676s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:49.602224    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:49.602480    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:49.605147    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:50.605575    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:50.605575    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:50.609094    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:51.610210    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:51.610210    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:51.613279    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:52.613438    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:52.613438    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:52.617857    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:57:53.618444    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:53.618444    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:53.622009    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:54.323567    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:54.399774    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:54.402767    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:54.402767    8792 retry.go:31] will retry after 5.650885129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:54.622233    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:54.622233    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:54.625806    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:57:54.625833    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:57:54.625833    8792 type.go:168] "Request Body" body=""
	I1212 19:57:54.625833    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:54.628220    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:55.628567    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:55.628567    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:55.632067    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:56.632335    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:56.632737    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:56.635417    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:57.485659    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:57.566715    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:57.570725    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:57.570725    8792 retry.go:31] will retry after 5.889801353s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:57.635601    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:57.636162    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:57.638437    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:58.639201    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:58.639201    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:58.641202    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:59.642751    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:59.642751    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:59.645820    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:00.059077    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:58:00.141196    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:00.144743    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:00.144828    8792 retry.go:31] will retry after 12.880427161s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:00.646278    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:00.646278    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:00.648514    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:01.648554    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:01.648554    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:01.652477    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:02.652719    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:02.652719    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:02.656865    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:58:03.466574    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:58:03.546687    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:03.552160    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:03.552160    8792 retry.go:31] will retry after 8.684375444s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:03.657068    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:03.657068    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:03.660376    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:04.660836    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:04.661165    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:04.664417    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:58:04.664489    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:58:04.664634    8792 type.go:168] "Request Body" body=""
	I1212 19:58:04.664723    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:04.667029    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:05.667419    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:05.667419    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:05.670032    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:06.670984    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:06.670984    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:06.674354    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:07.675175    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:07.675473    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:07.678161    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:08.679000    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:08.679000    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:08.682498    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:09.683536    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:09.684039    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:09.686703    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:10.687176    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:10.687514    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:10.691708    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:58:11.692097    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:11.692097    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:11.695419    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:12.243184    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:58:12.329214    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:12.335592    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:12.335592    8792 retry.go:31] will retry after 19.078221738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:12.695735    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:12.695735    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:12.698564    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:13.030727    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:58:13.107677    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:13.111475    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:13.111475    8792 retry.go:31] will retry after 24.078034123s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:13.699329    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:13.699329    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:13.703201    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:14.703632    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:14.703632    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:14.706632    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:58:14.706632    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:58:14.706632    8792 type.go:168] "Request Body" body=""
	I1212 19:58:14.706632    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:14.709461    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:15.709987    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:15.709987    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:15.713881    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:16.714426    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:16.714947    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:16.717509    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:17.718027    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:17.718027    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:17.721452    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:18.721719    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:18.722180    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:18.725521    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:19.726174    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:19.726174    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:19.731274    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:58:20.731838    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:20.731838    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:20.735774    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:21.736083    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:21.736083    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:21.739364    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:22.740462    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:22.740462    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:22.743494    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:23.744218    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:23.744882    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:23.747961    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:24.748401    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:24.748401    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:24.752939    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1212 19:58:24.752939    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:58:24.752939    8792 type.go:168] "Request Body" body=""
	I1212 19:58:24.752939    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:24.756295    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:25.756593    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:25.756959    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:25.759330    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:26.760825    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:26.760825    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:26.765414    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:58:27.765653    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:27.765653    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:27.769152    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:28.770176    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:28.770595    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:28.774341    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:29.774498    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:29.774498    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:29.777488    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:30.778437    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:30.778437    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:30.781414    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:31.419403    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:58:31.498102    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:31.502554    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:31.502554    8792 retry.go:31] will retry after 21.655222228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:31.781482    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:31.781482    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:31.783476    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1212 19:58:32.785130    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:32.785130    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:32.787452    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:33.788547    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:33.788547    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:33.791489    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:34.792428    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:34.792428    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:34.794457    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 19:58:34.794457    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:58:34.794457    8792 type.go:168] "Request Body" body=""
	I1212 19:58:34.794457    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:34.796423    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1212 19:58:35.796926    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:35.796926    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:35.800403    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:36.800694    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:36.800694    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:36.803902    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:37.195194    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:58:37.275035    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:37.278655    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:37.278655    8792 retry.go:31] will retry after 33.639329095s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:37.804194    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:37.804194    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:37.807496    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:38.808801    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:38.808801    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:38.811801    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:39.812262    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:39.812262    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:39.815469    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:40.816141    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:40.816141    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:40.819310    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:41.819973    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:41.819973    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:41.823039    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:42.824053    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:42.824053    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:42.827675    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:43.828345    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:43.828345    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:43.830350    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:44.830883    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:44.830883    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:44.834425    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:58:44.834502    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:58:44.834607    8792 type.go:168] "Request Body" body=""
	I1212 19:58:44.834703    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:44.836790    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:45.837202    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:45.837202    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:45.840615    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:46.840700    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:46.840700    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:46.843992    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:47.844334    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:47.844334    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:47.847669    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:48.848509    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:48.848509    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:48.851509    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:49.852471    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:49.852471    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:49.855417    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:50.855889    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:50.855889    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:50.858888    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:51.859324    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:51.859324    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:51.862752    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:52.863764    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:52.863764    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:52.867051    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:53.163493    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:58:53.239799    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:53.245721    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:53.245920    8792 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 19:58:53.867924    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:53.867924    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:53.871211    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:54.872502    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:54.872502    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:54.875103    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 19:58:54.875103    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:58:54.875635    8792 type.go:168] "Request Body" body=""
	I1212 19:58:54.875635    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:54.878074    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:55.878391    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:55.878391    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:55.881700    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:56.882314    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:56.882731    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:56.885332    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:57.886661    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:57.886661    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:57.890321    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:58.891069    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:58.891069    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:58.894045    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:59.894455    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:59.894455    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:59.897144    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:00.897724    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:00.897724    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:00.900925    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:01.901327    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:01.901327    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:01.904820    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:02.905377    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:02.905668    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:02.908844    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:03.909357    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:03.909357    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:03.912567    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:04.913190    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:04.913190    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:04.916248    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:59:04.916248    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:59:04.916248    8792 type.go:168] "Request Body" body=""
	I1212 19:59:04.916248    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:04.918608    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:05.918787    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:05.919084    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:05.921580    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:06.921873    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:06.921873    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:06.925988    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:59:07.927045    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:07.927045    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:07.930359    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:08.930575    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:08.930575    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:08.934014    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:09.935175    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:09.935175    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:09.939760    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:59:10.923536    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:59:10.940298    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:10.940298    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:10.942578    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:11.011286    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:59:11.011286    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:59:11.011286    8792 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 19:59:11.015418    8792 out.go:179] * Enabled addons: 
	I1212 19:59:11.018366    8792 addons.go:530] duration metric: took 1m36.8652549s for enable addons: enabled=[]
	I1212 19:59:11.943695    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:11.943695    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:11.946524    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:12.947004    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:12.947004    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:12.950107    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:13.950403    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:13.950403    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:13.953492    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:14.953762    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:14.953762    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:14.957001    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:59:14.957153    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:59:14.957292    8792 type.go:168] "Request Body" body=""
	I1212 19:59:14.957344    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:14.959399    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:15.959732    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:15.959732    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:15.963481    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:16.964631    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:16.964631    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:16.967431    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:17.968335    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:17.968716    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:17.971422    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:18.975421    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:18.975482    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:18.981353    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1212 19:59:19.982483    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:19.982483    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:19.986458    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:20.986878    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:20.986878    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:20.990580    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:21.991705    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:21.991705    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:21.994313    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:22.994828    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:22.994828    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:22.998384    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:23.999291    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:23.999572    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:24.001757    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:25.002197    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:25.002197    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:25.006076    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:59:25.006076    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:59:25.006076    8792 type.go:168] "Request Body" body=""
	I1212 19:59:25.006076    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:25.008833    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:26.009236    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:26.009483    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:26.013280    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:27.013991    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:27.013991    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:27.017339    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:28.017861    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:28.017861    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:28.020302    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:29.021278    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:29.021278    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:29.024910    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:30.025134    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:30.025134    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:30.028490    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:31.029228    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:31.029228    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:31.032192    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:32.033358    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:32.033358    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:32.037022    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:33.037052    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:33.037052    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:33.039997    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:34.040974    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:34.040974    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:34.044336    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:35.045158    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:35.045158    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:35.050424    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	W1212 19:59:35.050478    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:59:35.050634    8792 type.go:168] "Request Body" body=""
	I1212 19:59:35.050710    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:35.053272    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:36.053659    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:36.053659    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:36.056921    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:37.057862    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:37.057983    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:37.061055    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:38.061705    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:38.061705    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:38.064401    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:39.065070    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:39.065070    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:39.070212    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1212 19:59:40.070745    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:40.070745    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:40.074056    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:41.074238    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:41.074238    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:41.077817    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:42.078786    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:42.078786    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:42.082102    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:43.082439    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:43.082849    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:43.086074    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:44.086257    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:44.086257    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:44.089158    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:45.089746    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:45.089746    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:45.093004    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:59:45.093004    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:59:45.093004    8792 type.go:168] "Request Body" body=""
	I1212 19:59:45.093004    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:45.096683    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:46.097116    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:46.097615    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:46.100214    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:47.101361    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:47.101361    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:47.104657    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:48.104994    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:48.104994    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:48.108049    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:49.109535    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:49.109535    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:49.112664    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:50.113614    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:50.113614    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:50.117411    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:51.117709    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:51.117709    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:51.121291    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:52.121914    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:52.122224    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:52.125068    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:53.125697    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:53.126105    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:53.129084    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:54.129467    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:54.129467    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:54.133149    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:55.133722    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:55.133722    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:55.139098    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	W1212 19:59:55.139630    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:59:55.139774    8792 type.go:168] "Request Body" body=""
	I1212 19:59:55.139830    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:55.142212    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:56.142471    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:56.142471    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:56.145561    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:57.146754    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:57.146754    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:57.150691    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:58.151315    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:58.151315    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:58.153802    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:59.154632    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:59.154632    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:59.157895    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:00.158286    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:00.158286    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:00.161521    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:01.161851    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:01.161851    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:01.165478    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:02.166140    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:02.166140    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:02.169015    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:03.169549    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:03.169549    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:03.179028    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=9
	I1212 20:00:04.179254    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:04.179632    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:04.182303    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:05.183057    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:05.183057    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:05.186169    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:00:05.186202    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:00:05.186368    8792 type.go:168] "Request Body" body=""
	I1212 20:00:05.186427    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:05.188490    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:06.189369    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:06.189369    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:06.191767    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:07.192287    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:07.192287    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:07.195873    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:08.196564    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:08.196564    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:08.200301    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:09.200652    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:09.201050    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:09.203873    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:10.204621    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:10.204621    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:10.207991    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:11.208169    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:11.208695    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:11.211546    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:12.212265    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:12.212265    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:12.215652    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:13.216481    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:13.216481    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:13.218808    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:14.219114    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:14.219114    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:14.222371    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:15.223587    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:15.223882    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:15.226696    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:00:15.226696    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:00:15.226696    8792 type.go:168] "Request Body" body=""
	I1212 20:00:15.227288    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:15.230014    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:16.230255    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:16.230702    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:16.234073    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:17.234537    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:17.234537    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:17.238981    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:00:18.240162    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:18.240450    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:18.242671    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:19.244029    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:19.244029    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:19.247551    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:20.248288    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:20.248689    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:20.251486    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:21.252448    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:21.252448    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:21.255871    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:22.256129    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:22.256129    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:22.259292    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:23.259853    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:23.260152    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:23.263166    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:24.264181    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:24.264523    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:24.267309    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:25.267655    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:25.267655    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:25.270583    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:00:25.270681    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:00:25.270716    8792 type.go:168] "Request Body" body=""
	I1212 20:00:25.270716    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:25.272780    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:26.273236    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:26.273236    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:26.276531    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:27.277612    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:27.277612    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:27.280399    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:28.280976    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:28.281348    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:28.284050    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:29.284889    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:29.284889    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:29.288318    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:30.289605    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:30.289605    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:30.292210    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:31.292623    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:31.292623    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:31.296173    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:32.297272    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:32.297272    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:32.300365    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:33.300747    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:33.300747    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:33.304627    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:34.305148    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:34.305148    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:34.307286    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:35.308221    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:35.308221    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:35.311525    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:00:35.311525    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:00:35.311525    8792 type.go:168] "Request Body" body=""
	I1212 20:00:35.311525    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:35.314768    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:36.315303    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:36.315803    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:36.319885    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:00:37.320651    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:37.320651    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:37.323804    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:38.324633    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:38.324633    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:38.327596    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:39.328167    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:39.328827    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:39.332387    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:40.335388    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:40.335388    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:40.341222    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1212 20:00:41.342293    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:41.342293    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:41.346503    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:00:42.346733    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:42.347391    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:42.349901    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:43.350351    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:43.350351    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:43.353790    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:44.354356    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:44.354951    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:44.357421    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:45.357936    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:45.358254    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:45.361424    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:00:45.361488    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:00:45.361558    8792 type.go:168] "Request Body" body=""
	I1212 20:00:45.361734    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:45.364678    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:46.364915    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:46.364915    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:46.368243    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:47.368380    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:47.368380    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:47.371842    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:48.372123    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:48.372496    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:48.375782    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:49.376328    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:49.376328    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:49.379339    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:50.379689    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:50.380090    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:50.383968    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:51.384253    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:51.384253    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:51.387625    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:52.388421    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:52.388421    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:52.391331    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:53.392103    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:53.392524    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:53.395936    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:54.396522    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:54.396914    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:54.399312    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:55.399853    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:55.399853    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:55.404011    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1212 20:00:55.404054    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:00:55.404190    8792 type.go:168] "Request Body" body=""
	I1212 20:00:55.404190    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:55.406466    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:56.406717    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:56.406717    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:56.409652    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:57.409829    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:57.409829    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:57.413808    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:58.414272    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:58.414272    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:58.416891    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:59.418094    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:59.418094    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:59.422379    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:01:00.422928    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:00.423211    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:00.425511    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:01.426949    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:01.427372    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:01.429940    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:02.430697    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:02.430894    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:02.434142    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:03.434554    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:03.434554    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:03.438125    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:04.438646    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:04.438646    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:04.441873    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:05.442580    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:05.443007    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:05.445227    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:01:05.445288    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:01:05.445349    8792 type.go:168] "Request Body" body=""
	I1212 20:01:05.445349    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:05.447160    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1212 20:01:06.448042    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:06.448299    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:06.451364    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:07.451519    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:07.451519    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:07.454072    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:08.455225    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:08.455581    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:08.458949    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:09.459239    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:09.459483    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:09.462124    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:10.462488    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:10.462488    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:10.465073    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:11.466146    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:11.466334    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:11.468858    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:12.469556    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:12.469556    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:12.472263    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:13.473070    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:13.473070    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:13.476554    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:14.476996    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:14.477386    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:14.479751    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:15.480652    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:15.480652    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:15.484243    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:01:15.484268    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:01:15.484379    8792 type.go:168] "Request Body" body=""
	I1212 20:01:15.484379    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:15.486997    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:16.487837    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:16.487837    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:16.491073    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:17.491865    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:17.492218    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:17.495307    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:18.495909    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:18.495909    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:18.499046    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:19.499542    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:19.499542    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:19.502844    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:20.503664    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:20.503664    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:20.506838    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:21.507123    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:21.507496    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:21.510126    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:22.510522    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:22.510522    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:22.513442    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:23.514259    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:23.514259    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:23.516261    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:24.517279    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:24.517279    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:24.520541    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:25.521455    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:25.521455    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:25.524551    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:01:25.524625    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:01:25.524657    8792 type.go:168] "Request Body" body=""
	I1212 20:01:25.524657    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:25.527752    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:26.528360    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:26.528723    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:26.532917    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:01:27.533242    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:27.533242    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:27.537366    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:01:28.538106    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:28.538495    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:28.543549    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1212 20:01:29.544680    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:29.544680    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:29.548232    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:30.548450    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:30.548850    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:30.552101    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:31.552352    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:31.552352    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:31.556248    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:32.556689    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:32.556689    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:32.560889    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:01:33.561227    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:33.561227    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:33.565100    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:34.566919    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:34.566919    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:34.573248    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=6
	I1212 20:01:35.574024    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:35.574411    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:35.577335    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:01:35.577335    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:01:35.577335    8792 type.go:168] "Request Body" body=""
	I1212 20:01:35.577335    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:35.579846    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:36.580067    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:36.580067    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:36.582937    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:37.583614    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:37.584133    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:37.588041    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:38.588334    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:38.588334    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:38.590836    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:39.591771    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:39.592199    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:39.596300    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:40.596570    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:40.596570    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:40.599738    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:41.600585    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:41.600964    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:41.603618    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:42.604326    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:42.604326    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:42.607888    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:43.608118    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:43.608432    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:43.611303    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:44.612148    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:44.612148    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:44.615841    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:45.616729    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:45.616729    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:45.619383    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:01:45.619383    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:01:45.619913    8792 type.go:168] "Request Body" body=""
	I1212 20:01:45.619962    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:45.624234    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:01:46.624440    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:46.624440    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:46.631606    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=7
	I1212 20:01:47.631772    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:47.631772    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:47.634254    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:48.635335    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:48.635335    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:48.638393    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:49.638538    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:49.638538    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:49.642244    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:50.643486    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:50.643486    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:50.646864    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:51.647407    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:51.648062    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:51.651297    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:52.652310    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:52.652310    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:52.656003    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:53.657050    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:53.657050    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:53.660358    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:54.661093    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:54.661093    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:54.664217    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:55.665772    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:55.665772    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:55.669789    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1212 20:01:55.669789    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:01:55.669789    8792 type.go:168] "Request Body" body=""
	I1212 20:01:55.669789    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:55.672845    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:56.673184    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:56.673578    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:56.676091    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:57.677260    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:57.677260    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:57.680492    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:58.680999    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:58.681801    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:58.684437    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:59.685343    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:59.685343    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:59.688492    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:00.689226    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:00.689226    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:00.692407    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:01.693054    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:01.693054    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:01.696414    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:02.696707    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:02.696707    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:02.700656    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:03.701360    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:03.701764    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:03.704532    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:04.705055    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:04.705395    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:04.709582    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:02:05.709819    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:05.709819    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:05.712925    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:02:05.712925    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:02:05.712925    8792 type.go:168] "Request Body" body=""
	I1212 20:02:05.712925    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:05.714981    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:06.715647    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:06.715989    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:06.718856    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:07.719549    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:07.719950    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:07.723017    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:08.723622    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:08.723991    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:08.726824    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:09.727519    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:09.727519    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:09.731398    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:10.731940    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:10.732255    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:10.735314    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:11.736266    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:11.736266    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:11.739684    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:12.740926    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:12.741346    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:12.744101    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:13.745071    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:13.745071    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:13.749298    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:02:14.749764    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:14.749764    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:14.753277    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:15.753345    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:15.753345    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:15.755998    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:02:15.756520    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:02:15.756618    8792 type.go:168] "Request Body" body=""
	I1212 20:02:15.756676    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:15.758786    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:16.759785    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:16.759785    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:16.763359    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:17.763591    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:17.763591    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:17.767014    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:18.767248    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:18.767248    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:18.770795    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:19.770962    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:19.770962    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:19.773337    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:20.774557    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:20.774557    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:20.777421    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:21.778527    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:21.778968    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:21.782312    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:22.783001    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:22.783358    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:22.785874    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:23.786668    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:23.786668    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:23.789637    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:24.790000    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:24.790000    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:24.793439    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:25.793897    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:25.793897    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:25.797842    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:02:25.797972    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:02:25.797972    8792 type.go:168] "Request Body" body=""
	I1212 20:02:25.797972    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:25.800999    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:26.801297    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:26.801297    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:26.804559    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:27.805028    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:27.805383    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:27.808770    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:28.809311    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:28.809864    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:28.812697    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:29.812980    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:29.812980    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:29.816569    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:30.816822    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:30.816822    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:30.819812    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:31.820344    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:31.820344    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:31.824040    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:32.825223    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:32.825223    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:32.828636    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:33.828922    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:33.828922    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:33.833012    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:02:34.834105    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:34.834781    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:34.837739    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:35.838239    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:35.839054    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:35.842296    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:02:35.842377    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:02:35.842447    8792 type.go:168] "Request Body" body=""
	I1212 20:02:35.842525    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:35.845253    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:36.845542    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:36.845878    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:36.849197    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:37.849575    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:37.849575    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:37.852774    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:38.853254    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:38.853925    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:38.857020    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:39.857636    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:39.857636    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:39.861466    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:40.861880    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:40.862546    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:40.865734    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:41.866931    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:41.866931    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:41.870407    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:42.871284    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:42.871284    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:42.875909    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:02:43.876145    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:43.876145    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:43.879252    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:44.879595    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:44.879595    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:44.882581    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:45.882793    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:45.882793    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:45.886772    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:02:45.886823    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:02:45.886823    8792 type.go:168] "Request Body" body=""
	I1212 20:02:45.886823    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:45.889488    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:46.889817    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:46.889817    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:46.892533    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:47.893171    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:47.893605    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:47.897327    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:48.898243    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:48.898243    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:48.901190    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:49.901751    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:49.902239    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:49.905447    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:50.905509    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:50.905509    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:50.908968    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:51.909246    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:51.909595    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:51.913571    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:52.914178    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:52.914178    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:52.917630    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:53.918264    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:53.918264    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:53.921578    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:54.921843    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:54.921843    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:54.925388    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:55.925667    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:55.925667    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:55.929367    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:02:55.929367    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:02:55.929367    8792 type.go:168] "Request Body" body=""
	I1212 20:02:55.929367    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:55.932191    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:56.932533    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:56.932533    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:56.936530    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:57.937538    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:57.937902    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:57.940876    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:58.941300    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:58.941300    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:58.944722    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:59.945325    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:59.945325    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:59.948320    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:00.948833    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:00.948833    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:00.952416    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:01.953225    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:01.953225    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:01.956654    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:02.956910    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:02.956910    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:02.959952    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:03.960484    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:03.961032    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:03.963951    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:04.965244    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:04.965633    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:04.968258    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:05.968774    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:05.968774    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:05.971651    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:03:05.971651    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:03:05.971651    8792 type.go:168] "Request Body" body=""
	I1212 20:03:05.971651    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:05.974027    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:06.974449    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:06.974741    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:06.977205    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:07.977634    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:07.977798    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:07.981006    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:08.982134    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:08.982134    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:08.985063    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:09.985961    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:09.985961    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:09.988609    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:10.988755    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:10.988755    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:10.991472    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:11.992370    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:11.992370    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:11.996488    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:03:12.996868    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:12.997258    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:13.000762    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:14.001059    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:14.001059    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:14.004368    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:15.004777    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:15.004777    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:15.007757    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:16.008339    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:16.008625    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:16.011236    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:03:16.011236    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:03:16.011236    8792 type.go:168] "Request Body" body=""
	I1212 20:03:16.011236    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:16.013832    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:17.014609    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:17.014609    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:17.018477    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:18.018689    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:18.018689    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:18.022881    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:03:19.023377    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:19.023377    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:19.027571    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:03:20.028073    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:20.028073    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:20.031057    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:21.031744    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:21.032211    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:21.035492    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:22.036462    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:22.036462    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:22.038986    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:23.039813    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:23.040216    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:23.042835    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:24.043623    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:24.043623    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:24.047746    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:03:25.048465    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:25.048465    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:25.051125    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:26.051732    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:26.051732    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:26.055363    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:03:26.055363    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:03:26.055363    8792 type.go:168] "Request Body" body=""
	I1212 20:03:26.055363    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:26.058940    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:27.059108    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:27.059476    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:27.062503    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:28.062870    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:28.062870    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:28.066764    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:29.067215    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:29.067215    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:29.069923    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:30.070845    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:30.070845    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:30.073412    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:31.074536    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:31.074979    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:31.077758    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:32.078060    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:32.078060    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:32.082117    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:03:33.083505    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:33.083505    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:33.086255    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:34.087642    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:34.087642    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:34.090378    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:03:34.543368    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1212 20:03:34.543799    8792 node_ready.go:38] duration metric: took 6m0.000497s for node "functional-468800" to be "Ready" ...
	I1212 20:03:34.547199    8792 out.go:203] 
	W1212 20:03:34.550016    8792 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 20:03:34.550016    8792 out.go:285] * 
	W1212 20:03:34.552052    8792 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:03:34.555048    8792 out.go:203] 
	
	
	==> Docker <==
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.644022398Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.644029098Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.644048100Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.644083703Z" level=info msg="Initializing buildkit"
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.744677695Z" level=info msg="Completed buildkit initialization"
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.750002934Z" level=info msg="Daemon has completed initialization"
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.750231253Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.750252555Z" level=info msg="API listen on /run/docker.sock"
	Dec 12 19:57:30 functional-468800 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.750265456Z" level=info msg="API listen on [::]:2376"
	Dec 12 19:57:30 functional-468800 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 19:57:30 functional-468800 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 12 19:57:30 functional-468800 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 12 19:57:31 functional-468800 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Start docker client with request timeout 0s"
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Loaded network plugin cni"
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 12 19:57:31 functional-468800 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:03:37.089126   17388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:03:37.090263   17388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:03:37.092335   17388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:03:37.093574   17388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:03:37.094667   17388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000814] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000769] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000773] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000764] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000772] FS:  0000000000000000 GS:  0000000000000000
	[Dec12 19:57] CPU: 0 PID: 53838 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000857] RIP: 0033:0x7ff47e100b20
	[  +0.000391] Code: Unable to access opcode bytes at RIP 0x7ff47e100af6.
	[  +0.000659] RSP: 002b:00007ffe8b002070 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000797] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000766] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001155] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001186] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001227] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001126] FS:  0000000000000000 GS:  0000000000000000
	[  +0.862009] CPU: 6 PID: 53976 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000896] RIP: 0033:0x7f0cd9433b20
	[  +0.000429] Code: Unable to access opcode bytes at RIP 0x7f0cd9433af6.
	[  +0.000694] RSP: 002b:00007fff41d09ce0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000820] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000759] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000963] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000962] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001270] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000859] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 20:03:37 up  1:05,  0 user,  load average: 0.30, 0.32, 0.59
	Linux functional-468800 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 20:03:34 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:03:34 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 817.
	Dec 12 20:03:34 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:03:34 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:03:34 functional-468800 kubelet[17224]: E1212 20:03:34.795754   17224 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:03:34 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:03:34 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:03:35 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 818.
	Dec 12 20:03:35 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:03:35 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:03:35 functional-468800 kubelet[17239]: E1212 20:03:35.523692   17239 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:03:35 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:03:35 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:03:36 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 819.
	Dec 12 20:03:36 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:03:36 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:03:36 functional-468800 kubelet[17265]: E1212 20:03:36.273536   17265 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:03:36 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:03:36 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:03:36 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 820.
	Dec 12 20:03:36 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:03:36 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:03:37 functional-468800 kubelet[17363]: E1212 20:03:37.023752   17363 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:03:37 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:03:37 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-468800 -n functional-468800
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-468800 -n functional-468800: exit status 2 (587.1806ms)
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-468800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (373.67s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (53.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-468800 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-468800 get po -A: exit status 1 (50.38103s)

                                                
                                                
** stderr ** 
	E1212 20:03:48.861602    1756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:03:58.905755    1756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:04:08.951884    1756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:04:18.992243    1756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:04:29.034233    1756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-468800 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"E1212 20:03:48.861602    1756 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://127.0.0.1:55778/api?timeout=32s\\\": EOF\"\nE1212 20:03:58.905755    1756 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://127.0.0.1:55778/api?timeout=32s\\\": EOF\"\nE1212 20:04:08.951884    1756 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://127.0.0.1:55778/api?timeout=32s\\\": EOF\"\nE1212 20:04:18.992243    1756 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://127.0.0.1:55778/api?timeout=32s\\\": EOF\"\nE1212 20:04:29.034233    1756 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://127.0.0.1:55778/api?timeout=32s\\\": EOF\"\nUnable to connect to the server: EOF\n"*: args "kubectl --context functional-468800 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-468800 get po -A"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-468800
helpers_test.go:244: (dbg) docker inspect functional-468800:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356",
	        "Created": "2025-12-12T19:49:05.216637357Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42493,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T19:49:05.485304524Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/hosts",
	        "LogPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356-json.log",
	        "Name": "/functional-468800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-468800:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-468800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06-init/diff:/var/lib/docker/overlay2/98a48e148b465d6f3cfef773b7defe35ccd4e6e0eb3c49d8b329f27b5b52fa09/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-468800",
	                "Source": "/var/lib/docker/volumes/functional-468800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-468800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-468800",
	                "name.minikube.sigs.k8s.io": "functional-468800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1c576a1bc459e3ecef05356a56ff79fb60b3540cf362d7af9e4f9ccd4a4dded4",
	            "SandboxKey": "/var/run/docker/netns/1c576a1bc459",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55779"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55780"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55781"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55777"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55778"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-468800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a0db2e63885ea6d1e4cf32e8285a0f1e1fe10bae67ef9ab957c6270bdb8c136b",
	                    "EndpointID": "d5e453160824066e5fee477ceb2ca5411fd18d149268eed593084ba8a0cf13dd",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-468800",
	                        "0d3494c9d93e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-468800 -n functional-468800
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-468800 -n functional-468800: exit status 2 (575.3836ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-468800 logs -n 25: (1.1637635s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                          ARGS                                                           │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-461000 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr     │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:42 UTC │ 12 Dec 25 19:42 UTC │
	│ image          │ functional-461000 image ls                                                                                              │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:42 UTC │ 12 Dec 25 19:42 UTC │
	│ image          │ functional-461000 image save --daemon kicbase/echo-server:functional-461000 --alsologtostderr                           │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:42 UTC │ 12 Dec 25 19:42 UTC │
	│ start          │ -p functional-461000 --dry-run --memory 250MB --alsologtostderr --driver=docker                                         │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │                     │
	│ start          │ -p functional-461000 --dry-run --memory 250MB --alsologtostderr --driver=docker                                         │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │                     │
	│ start          │ -p functional-461000 --dry-run --alsologtostderr -v=1 --driver=docker                                                   │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │                     │
	│ service        │ functional-461000 service list                                                                                          │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ dashboard      │ --url --port 36195 -p functional-461000 --alsologtostderr -v=1                                                          │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │                     │
	│ service        │ functional-461000 service list -o json                                                                                  │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ update-context │ functional-461000 update-context --alsologtostderr -v=2                                                                 │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ update-context │ functional-461000 update-context --alsologtostderr -v=2                                                                 │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ service        │ functional-461000 service --namespace=default --https --url hello-node                                                  │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │                     │
	│ update-context │ functional-461000 update-context --alsologtostderr -v=2                                                                 │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ image          │ functional-461000 image ls --format yaml --alsologtostderr                                                              │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ ssh            │ functional-461000 ssh pgrep buildkitd                                                                                   │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │                     │
	│ image          │ functional-461000 image build -t localhost/my-image:functional-461000 testdata\build --alsologtostderr                  │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ image          │ functional-461000 image ls                                                                                              │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ image          │ functional-461000 image ls --format json --alsologtostderr                                                              │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ image          │ functional-461000 image ls --format table --alsologtostderr                                                             │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ image          │ functional-461000 image ls --format short --alsologtostderr                                                             │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ service        │ functional-461000 service hello-node --url --format={{.IP}}                                                             │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │                     │
	│ service        │ functional-461000 service hello-node --url                                                                              │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:44 UTC │                     │
	│ delete         │ -p functional-461000                                                                                                    │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:48 UTC │ 12 Dec 25 19:48 UTC │
	│ start          │ -p functional-468800 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:48 UTC │                     │
	│ start          │ -p functional-468800 --alsologtostderr -v=8                                                                             │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:57 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 19:57:24
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 19:57:24.956785    8792 out.go:360] Setting OutFile to fd 1808 ...
	I1212 19:57:24.998786    8792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:57:24.998786    8792 out.go:374] Setting ErrFile to fd 1700...
	I1212 19:57:24.998786    8792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:57:25.011786    8792 out.go:368] Setting JSON to false
	I1212 19:57:25.013782    8792 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3583,"bootTime":1765565861,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 19:57:25.013782    8792 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 19:57:25.016780    8792 out.go:179] * [functional-468800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 19:57:25.020780    8792 notify.go:221] Checking for updates...
	I1212 19:57:25.022780    8792 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 19:57:25.024782    8792 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 19:57:25.027780    8792 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 19:57:25.030779    8792 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 19:57:25.034782    8792 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:57:25.037790    8792 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 19:57:25.037790    8792 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:57:25.155476    8792 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 19:57:25.159985    8792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:57:25.387868    8792 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 19:57:25.372369133 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 19:57:25.391884    8792 out.go:179] * Using the docker driver based on existing profile
	I1212 19:57:25.396868    8792 start.go:309] selected driver: docker
	I1212 19:57:25.396868    8792 start.go:927] validating driver "docker" against &{Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:57:25.396868    8792 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 19:57:25.402871    8792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:57:25.622678    8792 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 19:57:25.606400505 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 19:57:25.701623    8792 cni.go:84] Creating CNI manager for ""
	I1212 19:57:25.701623    8792 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 19:57:25.701623    8792 start.go:353] cluster config:
	{Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:57:25.706631    8792 out.go:179] * Starting "functional-468800" primary control-plane node in "functional-468800" cluster
	I1212 19:57:25.708636    8792 cache.go:134] Beginning downloading kic base image for docker with docker
	I1212 19:57:25.711883    8792 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 19:57:25.714043    8792 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 19:57:25.714043    8792 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 19:57:25.714043    8792 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1212 19:57:25.714043    8792 cache.go:65] Caching tarball of preloaded images
	I1212 19:57:25.714043    8792 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 19:57:25.714043    8792 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1212 19:57:25.714043    8792 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\config.json ...
	I1212 19:57:25.792275    8792 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 19:57:25.792275    8792 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 19:57:25.792275    8792 cache.go:243] Successfully downloaded all kic artifacts
	I1212 19:57:25.792275    8792 start.go:360] acquireMachinesLock for functional-468800: {Name:mk7e7177bdfcb5a7969561474f8bb14fa15c1eab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 19:57:25.792275    8792 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-468800"
	I1212 19:57:25.792275    8792 start.go:96] Skipping create...Using existing machine configuration
	I1212 19:57:25.792275    8792 fix.go:54] fixHost starting: 
	I1212 19:57:25.799955    8792 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
	I1212 19:57:25.853025    8792 fix.go:112] recreateIfNeeded on functional-468800: state=Running err=<nil>
	W1212 19:57:25.853025    8792 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 19:57:25.856025    8792 out.go:252] * Updating the running docker "functional-468800" container ...
	I1212 19:57:25.856025    8792 machine.go:94] provisionDockerMachine start ...
	I1212 19:57:25.859025    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:25.918375    8792 main.go:143] libmachine: Using SSH client type: native
	I1212 19:57:25.918479    8792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:57:25.918479    8792 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 19:57:26.103358    8792 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-468800
	
	I1212 19:57:26.103411    8792 ubuntu.go:182] provisioning hostname "functional-468800"
	I1212 19:57:26.107534    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:26.162431    8792 main.go:143] libmachine: Using SSH client type: native
	I1212 19:57:26.162900    8792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:57:26.163030    8792 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-468800 && echo "functional-468800" | sudo tee /etc/hostname
	I1212 19:57:26.366993    8792 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-468800
	
	I1212 19:57:26.370927    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:26.421027    8792 main.go:143] libmachine: Using SSH client type: native
	I1212 19:57:26.422025    8792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:57:26.422025    8792 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-468800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-468800/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-468800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 19:57:26.592472    8792 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 19:57:26.592472    8792 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1212 19:57:26.592472    8792 ubuntu.go:190] setting up certificates
	I1212 19:57:26.592472    8792 provision.go:84] configureAuth start
	I1212 19:57:26.596494    8792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-468800
	I1212 19:57:26.648327    8792 provision.go:143] copyHostCerts
	I1212 19:57:26.648492    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1212 19:57:26.648569    8792 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1212 19:57:26.648569    8792 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1212 19:57:26.648569    8792 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 19:57:26.649807    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1212 19:57:26.649946    8792 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1212 19:57:26.649946    8792 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1212 19:57:26.649946    8792 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 19:57:26.650879    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1212 19:57:26.650879    8792 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1212 19:57:26.650879    8792 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1212 19:57:26.650879    8792 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1212 19:57:26.651440    8792 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-468800 san=[127.0.0.1 192.168.49.2 functional-468800 localhost minikube]
	I1212 19:57:26.782013    8792 provision.go:177] copyRemoteCerts
	I1212 19:57:26.785479    8792 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 19:57:26.788240    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:26.842524    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:26.968619    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1212 19:57:26.968964    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 19:57:26.995759    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1212 19:57:26.995759    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 19:57:27.024847    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1212 19:57:27.024847    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 19:57:27.057221    8792 provision.go:87] duration metric: took 464.7444ms to configureAuth
	I1212 19:57:27.057221    8792 ubuntu.go:206] setting minikube options for container-runtime
	I1212 19:57:27.057221    8792 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 19:57:27.061251    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:27.121889    8792 main.go:143] libmachine: Using SSH client type: native
	I1212 19:57:27.122548    8792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:57:27.122604    8792 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 19:57:27.313910    8792 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1212 19:57:27.313910    8792 ubuntu.go:71] root file system type: overlay
	I1212 19:57:27.313910    8792 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 19:57:27.317488    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:27.376486    8792 main.go:143] libmachine: Using SSH client type: native
	I1212 19:57:27.377052    8792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:57:27.377052    8792 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 19:57:27.577536    8792 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 19:57:27.581688    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:27.635455    8792 main.go:143] libmachine: Using SSH client type: native
	I1212 19:57:27.635931    8792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:57:27.635954    8792 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 19:57:27.828516    8792 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 19:57:27.828574    8792 machine.go:97] duration metric: took 1.9725293s to provisionDockerMachine
	I1212 19:57:27.828619    8792 start.go:293] postStartSetup for "functional-468800" (driver="docker")
	I1212 19:57:27.828619    8792 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 19:57:27.833127    8792 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 19:57:27.836440    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:27.891552    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:28.022421    8792 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 19:57:28.031829    8792 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1212 19:57:28.031829    8792 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1212 19:57:28.031829    8792 command_runner.go:130] > VERSION_ID="12"
	I1212 19:57:28.031829    8792 command_runner.go:130] > VERSION="12 (bookworm)"
	I1212 19:57:28.031829    8792 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1212 19:57:28.031829    8792 command_runner.go:130] > ID=debian
	I1212 19:57:28.031829    8792 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1212 19:57:28.031829    8792 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1212 19:57:28.031829    8792 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1212 19:57:28.031829    8792 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 19:57:28.031829    8792 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 19:57:28.031829    8792 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1212 19:57:28.032546    8792 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1212 19:57:28.033148    8792 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> 133962.pem in /etc/ssl/certs
	I1212 19:57:28.033204    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> /etc/ssl/certs/133962.pem
	I1212 19:57:28.033277    8792 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\13396\hosts -> hosts in /etc/test/nested/copy/13396
	I1212 19:57:28.033277    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\13396\hosts -> /etc/test/nested/copy/13396/hosts
	I1212 19:57:28.037935    8792 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/13396
	I1212 19:57:28.050821    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /etc/ssl/certs/133962.pem (1708 bytes)
	I1212 19:57:28.081156    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\13396\hosts --> /etc/test/nested/copy/13396/hosts (40 bytes)
	I1212 19:57:28.109846    8792 start.go:296] duration metric: took 281.2243ms for postStartSetup
	I1212 19:57:28.115818    8792 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 19:57:28.118674    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:28.171853    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:28.302700    8792 command_runner.go:130] > 1%
	I1212 19:57:28.308193    8792 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 19:57:28.316146    8792 command_runner.go:130] > 950G
	I1212 19:57:28.316204    8792 fix.go:56] duration metric: took 2.5239035s for fixHost
	I1212 19:57:28.316204    8792 start.go:83] releasing machines lock for "functional-468800", held for 2.5239035s
	I1212 19:57:28.320187    8792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-468800
	I1212 19:57:28.373764    8792 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1212 19:57:28.378728    8792 ssh_runner.go:195] Run: cat /version.json
	I1212 19:57:28.378728    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:28.382043    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:28.432252    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:28.433503    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:28.550849    8792 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W1212 19:57:28.550961    8792 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1212 19:57:28.550961    8792 command_runner.go:130] > {"iso_version": "v1.37.0-1765481609-22101", "kicbase_version": "v0.0.48-1765505794-22112", "minikube_version": "v1.37.0", "commit": "2e51b54b5cee5d454381ac23cfe3d8d395879671"}
	I1212 19:57:28.556187    8792 ssh_runner.go:195] Run: systemctl --version
	I1212 19:57:28.565686    8792 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1212 19:57:28.565686    8792 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1212 19:57:28.570074    8792 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 19:57:28.577782    8792 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 19:57:28.578775    8792 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 19:57:28.583114    8792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 19:57:28.595283    8792 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 19:57:28.595283    8792 start.go:496] detecting cgroup driver to use...
	I1212 19:57:28.595283    8792 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 19:57:28.595283    8792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 19:57:28.617880    8792 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 19:57:28.622700    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 19:57:28.640953    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 19:57:28.655059    8792 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 19:57:28.659503    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1212 19:57:28.659726    8792 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1212 19:57:28.659726    8792 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1212 19:57:28.678759    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 19:57:28.696413    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 19:57:28.715842    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 19:57:28.736528    8792 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 19:57:28.755951    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 19:57:28.776240    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 19:57:28.795721    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 19:57:28.815051    8792 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 19:57:28.829778    8792 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 19:57:28.834204    8792 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 19:57:28.852899    8792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:57:28.995620    8792 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 19:57:29.167559    8792 start.go:496] detecting cgroup driver to use...
	I1212 19:57:29.167559    8792 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 19:57:29.172911    8792 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 19:57:29.191693    8792 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1212 19:57:29.191693    8792 command_runner.go:130] > [Unit]
	I1212 19:57:29.191693    8792 command_runner.go:130] > Description=Docker Application Container Engine
	I1212 19:57:29.191693    8792 command_runner.go:130] > Documentation=https://docs.docker.com
	I1212 19:57:29.191693    8792 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1212 19:57:29.191693    8792 command_runner.go:130] > Wants=network-online.target containerd.service
	I1212 19:57:29.191693    8792 command_runner.go:130] > Requires=docker.socket
	I1212 19:57:29.191693    8792 command_runner.go:130] > StartLimitBurst=3
	I1212 19:57:29.191693    8792 command_runner.go:130] > StartLimitIntervalSec=60
	I1212 19:57:29.191693    8792 command_runner.go:130] > [Service]
	I1212 19:57:29.191693    8792 command_runner.go:130] > Type=notify
	I1212 19:57:29.191693    8792 command_runner.go:130] > Restart=always
	I1212 19:57:29.191693    8792 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1212 19:57:29.191693    8792 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1212 19:57:29.191693    8792 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1212 19:57:29.191693    8792 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1212 19:57:29.191693    8792 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1212 19:57:29.191693    8792 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1212 19:57:29.191693    8792 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1212 19:57:29.191693    8792 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1212 19:57:29.191693    8792 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1212 19:57:29.191693    8792 command_runner.go:130] > ExecStart=
	I1212 19:57:29.191693    8792 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1212 19:57:29.191693    8792 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1212 19:57:29.191693    8792 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1212 19:57:29.191693    8792 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1212 19:57:29.191693    8792 command_runner.go:130] > LimitNOFILE=infinity
	I1212 19:57:29.191693    8792 command_runner.go:130] > LimitNPROC=infinity
	I1212 19:57:29.191693    8792 command_runner.go:130] > LimitCORE=infinity
	I1212 19:57:29.191693    8792 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1212 19:57:29.191693    8792 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1212 19:57:29.191693    8792 command_runner.go:130] > TasksMax=infinity
	I1212 19:57:29.191693    8792 command_runner.go:130] > TimeoutStartSec=0
	I1212 19:57:29.191693    8792 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1212 19:57:29.191693    8792 command_runner.go:130] > Delegate=yes
	I1212 19:57:29.191693    8792 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1212 19:57:29.191693    8792 command_runner.go:130] > KillMode=process
	I1212 19:57:29.191693    8792 command_runner.go:130] > OOMScoreAdjust=-500
	I1212 19:57:29.191693    8792 command_runner.go:130] > [Install]
	I1212 19:57:29.191693    8792 command_runner.go:130] > WantedBy=multi-user.target
	I1212 19:57:29.196788    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 19:57:29.221924    8792 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 19:57:29.312337    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 19:57:29.337554    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 19:57:29.357559    8792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 19:57:29.379522    8792 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1212 19:57:29.384213    8792 ssh_runner.go:195] Run: which cri-dockerd
	I1212 19:57:29.390808    8792 command_runner.go:130] > /usr/bin/cri-dockerd
	I1212 19:57:29.396438    8792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 19:57:29.409074    8792 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1212 19:57:29.434191    8792 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 19:57:29.578871    8792 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 19:57:29.719341    8792 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 19:57:29.719341    8792 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 19:57:29.746173    8792 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1212 19:57:29.768870    8792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:57:29.905737    8792 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 19:57:30.757640    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 19:57:30.780953    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1212 19:57:30.802218    8792 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1212 19:57:30.829184    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 19:57:30.853409    8792 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 19:57:30.994012    8792 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 19:57:31.134627    8792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:57:31.283484    8792 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 19:57:31.309618    8792 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1212 19:57:31.333897    8792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:57:31.475108    8792 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1212 19:57:31.578219    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 19:57:31.597007    8792 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 19:57:31.600988    8792 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 19:57:31.610316    8792 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1212 19:57:31.611281    8792 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 19:57:31.611281    8792 command_runner.go:130] > Device: 0,112	Inode: 1755        Links: 1
	I1212 19:57:31.611281    8792 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1212 19:57:31.611281    8792 command_runner.go:130] > Access: 2025-12-12 19:57:31.474638770 +0000
	I1212 19:57:31.611281    8792 command_runner.go:130] > Modify: 2025-12-12 19:57:31.474638770 +0000
	I1212 19:57:31.611281    8792 command_runner.go:130] > Change: 2025-12-12 19:57:31.484639595 +0000
	I1212 19:57:31.611281    8792 command_runner.go:130] >  Birth: -
	I1212 19:57:31.611281    8792 start.go:564] Will wait 60s for crictl version
	I1212 19:57:31.615844    8792 ssh_runner.go:195] Run: which crictl
	I1212 19:57:31.621876    8792 command_runner.go:130] > /usr/local/bin/crictl
	I1212 19:57:31.626999    8792 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 19:57:31.672687    8792 command_runner.go:130] > Version:  0.1.0
	I1212 19:57:31.672762    8792 command_runner.go:130] > RuntimeName:  docker
	I1212 19:57:31.672762    8792 command_runner.go:130] > RuntimeVersion:  29.1.2
	I1212 19:57:31.672762    8792 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 19:57:31.672790    8792 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1212 19:57:31.676132    8792 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 19:57:31.713311    8792 command_runner.go:130] > 29.1.2
	I1212 19:57:31.716489    8792 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 19:57:31.755737    8792 command_runner.go:130] > 29.1.2
	I1212 19:57:31.761482    8792 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1212 19:57:31.765357    8792 cli_runner.go:164] Run: docker exec -t functional-468800 dig +short host.docker.internal
	I1212 19:57:31.901903    8792 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1212 19:57:31.906530    8792 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1212 19:57:31.913687    8792 command_runner.go:130] > 192.168.65.254	host.minikube.internal
	I1212 19:57:31.917320    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:31.973317    8792 kubeadm.go:884] updating cluster {Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custo
mQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 19:57:31.973590    8792 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 19:57:31.977450    8792 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1212 19:57:32.013673    8792 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 19:57:32.013673    8792 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 19:57:32.013673    8792 docker.go:621] Images already preloaded, skipping extraction
	I1212 19:57:32.017349    8792 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1212 19:57:32.047537    8792 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 19:57:32.047537    8792 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 19:57:32.047537    8792 cache_images.go:86] Images are preloaded, skipping loading
	I1212 19:57:32.047537    8792 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1212 19:57:32.048190    8792 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-468800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 19:57:32.051146    8792 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 19:57:32.121447    8792 command_runner.go:130] > cgroupfs
	I1212 19:57:32.121447    8792 cni.go:84] Creating CNI manager for ""
	I1212 19:57:32.121447    8792 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 19:57:32.121447    8792 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 19:57:32.121964    8792 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-468800 NodeName:functional-468800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 19:57:32.122106    8792 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-468800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 19:57:32.126035    8792 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 19:57:32.138764    8792 command_runner.go:130] > kubeadm
	I1212 19:57:32.138798    8792 command_runner.go:130] > kubectl
	I1212 19:57:32.138825    8792 command_runner.go:130] > kubelet
	I1212 19:57:32.138845    8792 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 19:57:32.143533    8792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 19:57:32.155602    8792 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1212 19:57:32.179900    8792 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 19:57:32.199342    8792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1212 19:57:32.222871    8792 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 19:57:32.229151    8792 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1212 19:57:32.234589    8792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:57:32.373967    8792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 19:57:32.974236    8792 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800 for IP: 192.168.49.2
	I1212 19:57:32.974236    8792 certs.go:195] generating shared ca certs ...
	I1212 19:57:32.974236    8792 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:57:32.975214    8792 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1212 19:57:32.975214    8792 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1212 19:57:32.975214    8792 certs.go:257] generating profile certs ...
	I1212 19:57:32.976191    8792 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\client.key
	I1212 19:57:32.976561    8792 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.key.a2fee78d
	I1212 19:57:32.976892    8792 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.key
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 19:57:32.977527    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 19:57:32.977863    8792 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem (1338 bytes)
	W1212 19:57:32.977863    8792 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396_empty.pem, impossibly tiny 0 bytes
	I1212 19:57:32.977863    8792 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1212 19:57:32.978401    8792 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 19:57:32.978646    8792 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 19:57:32.978696    8792 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1212 19:57:32.978696    8792 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem (1708 bytes)
	I1212 19:57:32.979304    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem -> /usr/share/ca-certificates/13396.pem
	I1212 19:57:32.979449    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> /usr/share/ca-certificates/133962.pem
	I1212 19:57:32.979529    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:32.980729    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 19:57:33.008686    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 19:57:33.035660    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 19:57:33.063247    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 19:57:33.108547    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 19:57:33.138500    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 19:57:33.165883    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 19:57:33.195246    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 19:57:33.221022    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem --> /usr/share/ca-certificates/13396.pem (1338 bytes)
	I1212 19:57:33.248791    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /usr/share/ca-certificates/133962.pem (1708 bytes)
	I1212 19:57:33.274438    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 19:57:33.302337    8792 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 19:57:33.324312    8792 ssh_runner.go:195] Run: openssl version
	I1212 19:57:33.335263    8792 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1212 19:57:33.339948    8792 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13396.pem
	I1212 19:57:33.356389    8792 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13396.pem /etc/ssl/certs/13396.pem
	I1212 19:57:33.375441    8792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13396.pem
	I1212 19:57:33.384435    8792 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 19:57:33.384435    8792 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 19:57:33.387660    8792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13396.pem
	I1212 19:57:33.430281    8792 command_runner.go:130] > 51391683
	I1212 19:57:33.435287    8792 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 19:57:33.452481    8792 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/133962.pem
	I1212 19:57:33.471523    8792 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/133962.pem /etc/ssl/certs/133962.pem
	I1212 19:57:33.489874    8792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133962.pem
	I1212 19:57:33.498068    8792 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 19:57:33.498068    8792 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 19:57:33.502698    8792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133962.pem
	I1212 19:57:33.544550    8792 command_runner.go:130] > 3ec20f2e
	I1212 19:57:33.549548    8792 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 19:57:33.566747    8792 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:33.583990    8792 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 19:57:33.600438    8792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:33.607945    8792 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:33.607945    8792 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:33.614484    8792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:33.657826    8792 command_runner.go:130] > b5213941
	I1212 19:57:33.662138    8792 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 19:57:33.678498    8792 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 19:57:33.685111    8792 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 19:57:33.685111    8792 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1212 19:57:33.685111    8792 command_runner.go:130] > Device: 8,48	Inode: 15292       Links: 1
	I1212 19:57:33.685111    8792 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 19:57:33.685797    8792 command_runner.go:130] > Access: 2025-12-12 19:53:20.728281925 +0000
	I1212 19:57:33.685797    8792 command_runner.go:130] > Modify: 2025-12-12 19:49:18.176374111 +0000
	I1212 19:57:33.685797    8792 command_runner.go:130] > Change: 2025-12-12 19:49:18.176374111 +0000
	I1212 19:57:33.685797    8792 command_runner.go:130] >  Birth: 2025-12-12 19:49:18.176374111 +0000
	I1212 19:57:33.689949    8792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 19:57:33.733144    8792 command_runner.go:130] > Certificate will not expire
	I1212 19:57:33.737823    8792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 19:57:33.780151    8792 command_runner.go:130] > Certificate will not expire
	I1212 19:57:33.785054    8792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 19:57:33.827773    8792 command_runner.go:130] > Certificate will not expire
	I1212 19:57:33.833292    8792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 19:57:33.875401    8792 command_runner.go:130] > Certificate will not expire
	I1212 19:57:33.880293    8792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 19:57:33.922924    8792 command_runner.go:130] > Certificate will not expire
	I1212 19:57:33.927940    8792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 19:57:33.970239    8792 command_runner.go:130] > Certificate will not expire
	I1212 19:57:33.970239    8792 kubeadm.go:401] StartCluster: {Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:57:33.976672    8792 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 19:57:34.008252    8792 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 19:57:34.020977    8792 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1212 19:57:34.021018    8792 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1212 19:57:34.021018    8792 command_runner.go:130] > /var/lib/minikube/etcd:
	I1212 19:57:34.021108    8792 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 19:57:34.021108    8792 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 19:57:34.025234    8792 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 19:57:34.045139    8792 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 19:57:34.049590    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:34.107138    8792 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-468800" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 19:57:34.107889    8792 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "functional-468800" cluster setting kubeconfig missing "functional-468800" context setting]
	I1212 19:57:34.107889    8792 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:57:34.126355    8792 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 19:57:34.126843    8792 kapi.go:59] client config for functional-468800: &rest.Config{Host:"https://127.0.0.1:55778", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff6e1d19080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 19:57:34.128169    8792 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 19:57:34.128230    8792 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 19:57:34.128230    8792 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 19:57:34.128350    8792 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 19:57:34.128350    8792 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 19:57:34.128350    8792 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 19:57:34.132435    8792 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 19:57:34.149951    8792 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1212 19:57:34.150008    8792 kubeadm.go:602] duration metric: took 128.8994ms to restartPrimaryControlPlane
	I1212 19:57:34.150032    8792 kubeadm.go:403] duration metric: took 179.7913ms to StartCluster
	I1212 19:57:34.150032    8792 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:57:34.150032    8792 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 19:57:34.151180    8792 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:57:34.152111    8792 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 19:57:34.152111    8792 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 19:57:34.152386    8792 addons.go:70] Setting storage-provisioner=true in profile "functional-468800"
	I1212 19:57:34.152386    8792 addons.go:70] Setting default-storageclass=true in profile "functional-468800"
	I1212 19:57:34.152426    8792 addons.go:239] Setting addon storage-provisioner=true in "functional-468800"
	I1212 19:57:34.152475    8792 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-468800"
	I1212 19:57:34.152564    8792 host.go:66] Checking if "functional-468800" exists ...
	I1212 19:57:34.152599    8792 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 19:57:34.155555    8792 out.go:179] * Verifying Kubernetes components...
	I1212 19:57:34.161161    8792 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
	I1212 19:57:34.161613    8792 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
	I1212 19:57:34.163072    8792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:57:34.221534    8792 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 19:57:34.221534    8792 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 19:57:34.221534    8792 kapi.go:59] client config for functional-468800: &rest.Config{Host:"https://127.0.0.1:55778", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff6e1d19080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 19:57:34.222943    8792 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 19:57:34.223481    8792 addons.go:239] Setting addon default-storageclass=true in "functional-468800"
	I1212 19:57:34.223558    8792 host.go:66] Checking if "functional-468800" exists ...
	I1212 19:57:34.223558    8792 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:34.223558    8792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 19:57:34.227691    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:34.230256    8792 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
	I1212 19:57:34.287093    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:34.289848    8792 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:34.289848    8792 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 19:57:34.293811    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:34.345554    8792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 19:57:34.348560    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:34.426758    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:34.480013    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:34.480104    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:34.534162    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:34.538400    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.538479    8792 retry.go:31] will retry after 344.600735ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.538532    8792 node_ready.go:35] waiting up to 6m0s for node "functional-468800" to be "Ready" ...
	I1212 19:57:34.539394    8792 type.go:168] "Request Body" body=""
	I1212 19:57:34.539597    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:34.541949    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:34.608531    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:34.613599    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.613599    8792 retry.go:31] will retry after 216.683996ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.835959    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:34.887701    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:34.908576    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:34.913475    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.913475    8792 retry.go:31] will retry after 230.473341ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.961197    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:34.966061    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.966061    8792 retry.go:31] will retry after 349.771822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.150121    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:35.221040    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:35.228247    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.228333    8792 retry.go:31] will retry after 512.778483ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.321063    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:35.394131    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:35.397148    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.397148    8792 retry.go:31] will retry after 487.352123ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.542707    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:35.542707    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:35.545160    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:35.747496    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:35.819613    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:35.822659    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.822659    8792 retry.go:31] will retry after 1.154413243s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.890743    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:35.965246    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:35.972460    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.972460    8792 retry.go:31] will retry after 1.245938436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:36.545730    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:36.545730    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:36.549771    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:57:36.983387    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:37.090901    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:37.094847    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:37.094847    8792 retry.go:31] will retry after 1.548342934s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:37.223991    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:37.295689    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:37.299705    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:37.299769    8792 retry.go:31] will retry after 1.579528606s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:37.551013    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:37.551013    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:37.554154    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:38.554939    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:38.555432    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:38.558234    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:38.649390    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:38.725500    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:38.729499    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:38.729499    8792 retry.go:31] will retry after 2.648471583s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:38.884600    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:38.953302    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:38.958318    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:38.958318    8792 retry.go:31] will retry after 2.058418403s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:39.559077    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:39.559356    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:39.562225    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:40.562954    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:40.563393    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:40.566347    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:41.022091    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:41.102318    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:41.106247    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:41.106247    8792 retry.go:31] will retry after 3.080320353s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:41.384408    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:41.470520    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:41.473795    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:41.473795    8792 retry.go:31] will retry after 2.343057986s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:41.566604    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:41.566604    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:41.569639    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:42.569950    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:42.569950    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:42.573153    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:43.573545    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:43.573545    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:43.577655    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:57:43.821674    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:43.897847    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:43.901846    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:43.901846    8792 retry.go:31] will retry after 5.566518346s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:44.193277    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:44.263403    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:44.269459    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:44.269459    8792 retry.go:31] will retry after 4.550082482s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:44.577835    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:44.577835    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:44.580876    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:57:44.581034    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:57:44.581158    8792 type.go:168] "Request Body" body=""
	I1212 19:57:44.581244    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:44.583508    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:45.583961    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:45.583961    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:45.587161    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:46.587855    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:46.588199    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:46.590728    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:47.591504    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:47.591504    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:47.594168    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:48.595392    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:48.595392    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:48.601208    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1212 19:57:48.824534    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:48.903714    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:48.909283    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:48.909283    8792 retry.go:31] will retry after 5.408295828s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:49.475338    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:49.554836    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:49.559515    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:49.559515    8792 retry.go:31] will retry after 7.920709676s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:49.602224    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:49.602480    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:49.605147    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:50.605575    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:50.605575    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:50.609094    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:51.610210    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:51.610210    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:51.613279    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:52.613438    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:52.613438    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:52.617857    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:57:53.618444    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:53.618444    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:53.622009    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:54.323567    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:54.399774    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:54.402767    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:54.402767    8792 retry.go:31] will retry after 5.650885129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:54.622233    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:54.622233    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:54.625806    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:57:54.625833    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:57:54.625833    8792 type.go:168] "Request Body" body=""
	I1212 19:57:54.625833    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:54.628220    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:55.628567    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:55.628567    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:55.632067    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:56.632335    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:56.632737    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:56.635417    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:57.485659    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:57.566715    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:57.570725    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:57.570725    8792 retry.go:31] will retry after 5.889801353s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:57.635601    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:57.636162    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:57.638437    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:58.639201    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:58.639201    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:58.641202    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:59.642751    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:59.642751    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:59.645820    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:00.059077    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:58:00.141196    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:00.144743    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:00.144828    8792 retry.go:31] will retry after 12.880427161s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:00.646278    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:00.646278    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:00.648514    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:01.648554    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:01.648554    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:01.652477    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:02.652719    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:02.652719    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:02.656865    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:58:03.466574    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:58:03.546687    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:03.552160    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:03.552160    8792 retry.go:31] will retry after 8.684375444s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:03.657068    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:03.657068    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:03.660376    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:04.660836    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:04.661165    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:04.664417    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:58:04.664489    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:58:04.664634    8792 type.go:168] "Request Body" body=""
	I1212 19:58:04.664723    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:04.667029    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:05.667419    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:05.667419    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:05.670032    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:06.670984    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:06.670984    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:06.674354    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:07.675175    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:07.675473    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:07.678161    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:08.679000    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:08.679000    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:08.682498    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:09.683536    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:09.684039    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:09.686703    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:10.687176    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:10.687514    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:10.691708    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:58:11.692097    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:11.692097    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:11.695419    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:12.243184    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:58:12.329214    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:12.335592    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:12.335592    8792 retry.go:31] will retry after 19.078221738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:12.695735    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:12.695735    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:12.698564    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:13.030727    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:58:13.107677    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:13.111475    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:13.111475    8792 retry.go:31] will retry after 24.078034123s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:13.699329    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:13.699329    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:13.703201    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:14.703632    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:14.703632    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:14.706632    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:58:14.706632    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:58:14.706632    8792 type.go:168] "Request Body" body=""
	I1212 19:58:14.706632    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:14.709461    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:15.709987    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:15.709987    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:15.713881    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:16.714426    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:16.714947    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:16.717509    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:17.718027    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:17.718027    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:17.721452    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:18.721719    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:18.722180    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:18.725521    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:19.726174    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:19.726174    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:19.731274    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:58:20.731838    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:20.731838    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:20.735774    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:21.736083    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:21.736083    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:21.739364    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:22.740462    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:22.740462    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:22.743494    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:23.744218    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:23.744882    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:23.747961    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:24.748401    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:24.748401    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:24.752939    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1212 19:58:24.752939    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:58:24.752939    8792 type.go:168] "Request Body" body=""
	I1212 19:58:24.752939    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:24.756295    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:25.756593    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:25.756959    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:25.759330    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:26.760825    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:26.760825    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:26.765414    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:58:27.765653    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:27.765653    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:27.769152    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:28.770176    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:28.770595    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:28.774341    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:29.774498    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:29.774498    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:29.777488    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:30.778437    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:30.778437    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:30.781414    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:31.419403    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:58:31.498102    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:31.502554    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:31.502554    8792 retry.go:31] will retry after 21.655222228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:31.781482    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:31.781482    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:31.783476    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1212 19:58:32.785130    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:32.785130    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:32.787452    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:33.788547    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:33.788547    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:33.791489    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:34.792428    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:34.792428    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:34.794457    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 19:58:34.794457    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:58:34.794457    8792 type.go:168] "Request Body" body=""
	I1212 19:58:34.794457    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:34.796423    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1212 19:58:35.796926    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:35.796926    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:35.800403    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:36.800694    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:36.800694    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:36.803902    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:37.195194    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:58:37.275035    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:37.278655    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:37.278655    8792 retry.go:31] will retry after 33.639329095s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:37.804194    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:37.804194    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:37.807496    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:38.808801    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:38.808801    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:38.811801    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:39.812262    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:39.812262    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:39.815469    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:40.816141    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:40.816141    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:40.819310    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:41.819973    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:41.819973    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:41.823039    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:42.824053    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:42.824053    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:42.827675    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:43.828345    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:43.828345    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:43.830350    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:44.830883    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:44.830883    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:44.834425    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:58:44.834502    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:58:44.834607    8792 type.go:168] "Request Body" body=""
	I1212 19:58:44.834703    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:44.836790    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:45.837202    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:45.837202    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:45.840615    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:46.840700    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:46.840700    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:46.843992    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:47.844334    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:47.844334    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:47.847669    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:48.848509    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:48.848509    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:48.851509    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:49.852471    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:49.852471    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:49.855417    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:50.855889    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:50.855889    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:50.858888    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:51.859324    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:51.859324    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:51.862752    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:52.863764    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:52.863764    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:52.867051    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:53.163493    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:58:53.239799    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:53.245721    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:53.245920    8792 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 19:58:53.867924    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:53.867924    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:53.871211    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:54.872502    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:54.872502    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:54.875103    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 19:58:54.875103    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:58:54.875635    8792 type.go:168] "Request Body" body=""
	I1212 19:58:54.875635    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:54.878074    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:55.878391    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:55.878391    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:55.881700    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:56.882314    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:56.882731    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:56.885332    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:57.886661    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:57.886661    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:57.890321    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:58.891069    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:58.891069    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:58.894045    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:59.894455    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:59.894455    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:59.897144    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:00.897724    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:00.897724    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:00.900925    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:01.901327    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:01.901327    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:01.904820    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:02.905377    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:02.905668    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:02.908844    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:03.909357    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:03.909357    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:03.912567    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:04.913190    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:04.913190    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:04.916248    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:59:04.916248    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:59:04.916248    8792 type.go:168] "Request Body" body=""
	I1212 19:59:04.916248    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:04.918608    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:05.918787    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:05.919084    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:05.921580    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:06.921873    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:06.921873    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:06.925988    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:59:07.927045    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:07.927045    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:07.930359    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:08.930575    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:08.930575    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:08.934014    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:09.935175    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:09.935175    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:09.939760    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:59:10.923536    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:59:10.940298    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:10.940298    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:10.942578    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:11.011286    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:59:11.011286    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:59:11.011286    8792 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 19:59:11.015418    8792 out.go:179] * Enabled addons: 
	I1212 19:59:11.018366    8792 addons.go:530] duration metric: took 1m36.8652549s for enable addons: enabled=[]
	I1212 19:59:11.943695    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:11.943695    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:11.946524    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:12.947004    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:12.947004    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:12.950107    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:13.950403    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:13.950403    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:13.953492    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:14.953762    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:14.953762    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:14.957001    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:59:14.957153    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:59:14.957292    8792 type.go:168] "Request Body" body=""
	I1212 19:59:14.957344    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:14.959399    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:15.959732    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:15.959732    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:15.963481    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:16.964631    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:16.964631    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:16.967431    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:17.968335    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:17.968716    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:17.971422    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:18.975421    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:18.975482    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:18.981353    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1212 19:59:19.982483    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:19.982483    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:19.986458    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:20.986878    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:20.986878    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:20.990580    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:21.991705    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:21.991705    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:21.994313    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:22.994828    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:22.994828    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:22.998384    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:23.999291    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:23.999572    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:24.001757    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:25.002197    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:25.002197    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:25.006076    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:59:25.006076    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:59:25.006076    8792 type.go:168] "Request Body" body=""
	I1212 19:59:25.006076    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:25.008833    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:26.009236    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:26.009483    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:26.013280    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:27.013991    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:27.013991    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:27.017339    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:28.017861    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:28.017861    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:28.020302    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:29.021278    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:29.021278    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:29.024910    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:30.025134    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:30.025134    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:30.028490    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:31.029228    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:31.029228    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:31.032192    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:32.033358    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:32.033358    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:32.037022    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:33.037052    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:33.037052    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:33.039997    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:34.040974    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:34.040974    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:34.044336    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:35.045158    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:35.045158    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:35.050424    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	W1212 19:59:35.050478    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:59:35.050634    8792 type.go:168] "Request Body" body=""
	I1212 19:59:35.050710    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:35.053272    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:36.053659    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:36.053659    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:36.056921    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:37.057862    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:37.057983    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:37.061055    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:38.061705    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:38.061705    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:38.064401    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:39.065070    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:39.065070    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:39.070212    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1212 19:59:40.070745    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:40.070745    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:40.074056    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:41.074238    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:41.074238    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:41.077817    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:42.078786    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:42.078786    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:42.082102    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:43.082439    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:43.082849    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:43.086074    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:44.086257    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:44.086257    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:44.089158    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:45.089746    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:45.089746    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:45.093004    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:59:45.093004    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:59:45.093004    8792 type.go:168] "Request Body" body=""
	I1212 19:59:45.093004    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:45.096683    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:46.097116    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:46.097615    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:46.100214    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:47.101361    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:47.101361    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:47.104657    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:48.104994    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:48.104994    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:48.108049    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:49.109535    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:49.109535    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:49.112664    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:50.113614    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:50.113614    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:50.117411    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:51.117709    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:51.117709    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:51.121291    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:52.121914    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:52.122224    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:52.125068    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:53.125697    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:53.126105    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:53.129084    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:54.129467    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:54.129467    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:54.133149    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:55.133722    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:55.133722    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:55.139098    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	W1212 19:59:55.139630    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:59:55.139774    8792 type.go:168] "Request Body" body=""
	I1212 19:59:55.139830    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:55.142212    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:56.142471    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:56.142471    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:56.145561    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:57.146754    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:57.146754    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:57.150691    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:58.151315    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:58.151315    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:58.153802    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:59.154632    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:59.154632    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:59.157895    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:00.158286    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:00.158286    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:00.161521    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:01.161851    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:01.161851    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:01.165478    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:02.166140    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:02.166140    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:02.169015    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:03.169549    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:03.169549    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:03.179028    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=9
	I1212 20:00:04.179254    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:04.179632    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:04.182303    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:05.183057    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:05.183057    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:05.186169    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:00:05.186202    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:00:05.186368    8792 type.go:168] "Request Body" body=""
	I1212 20:00:05.186427    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:05.188490    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:06.189369    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:06.189369    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:06.191767    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:07.192287    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:07.192287    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:07.195873    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:08.196564    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:08.196564    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:08.200301    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:09.200652    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:09.201050    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:09.203873    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:10.204621    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:10.204621    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:10.207991    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:11.208169    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:11.208695    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:11.211546    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:12.212265    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:12.212265    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:12.215652    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:13.216481    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:13.216481    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:13.218808    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:14.219114    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:14.219114    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:14.222371    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:15.223587    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:15.223882    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:15.226696    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:00:15.226696    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:00:15.226696    8792 type.go:168] "Request Body" body=""
	I1212 20:00:15.227288    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:15.230014    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:16.230255    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:16.230702    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:16.234073    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:17.234537    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:17.234537    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:17.238981    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:00:18.240162    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:18.240450    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:18.242671    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:19.244029    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:19.244029    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:19.247551    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:20.248288    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:20.248689    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:20.251486    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:21.252448    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:21.252448    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:21.255871    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:22.256129    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:22.256129    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:22.259292    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:23.259853    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:23.260152    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:23.263166    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:24.264181    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:24.264523    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:24.267309    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:25.267655    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:25.267655    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:25.270583    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:00:25.270681    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:00:25.270716    8792 type.go:168] "Request Body" body=""
	I1212 20:00:25.270716    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:25.272780    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:26.273236    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:26.273236    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:26.276531    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:27.277612    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:27.277612    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:27.280399    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:28.280976    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:28.281348    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:28.284050    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:29.284889    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:29.284889    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:29.288318    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:30.289605    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:30.289605    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:30.292210    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:31.292623    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:31.292623    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:31.296173    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:32.297272    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:32.297272    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:32.300365    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:33.300747    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:33.300747    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:33.304627    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:34.305148    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:34.305148    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:34.307286    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:35.308221    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:35.308221    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:35.311525    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:00:35.311525    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:00:35.311525    8792 type.go:168] "Request Body" body=""
	I1212 20:00:35.311525    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:35.314768    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:36.315303    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:36.315803    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:36.319885    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:00:37.320651    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:37.320651    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:37.323804    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:38.324633    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:38.324633    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:38.327596    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:39.328167    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:39.328827    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:39.332387    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:40.335388    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:40.335388    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:40.341222    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1212 20:00:41.342293    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:41.342293    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:41.346503    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:00:42.346733    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:42.347391    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:42.349901    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:43.350351    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:43.350351    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:43.353790    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:44.354356    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:44.354951    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:44.357421    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:45.357936    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:45.358254    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:45.361424    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:00:45.361488    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:00:45.361558    8792 type.go:168] "Request Body" body=""
	I1212 20:00:45.361734    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:45.364678    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:46.364915    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:46.364915    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:46.368243    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:47.368380    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:47.368380    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:47.371842    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:48.372123    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:48.372496    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:48.375782    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:49.376328    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:49.376328    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:49.379339    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:50.379689    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:50.380090    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:50.383968    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:51.384253    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:51.384253    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:51.387625    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:52.388421    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:52.388421    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:52.391331    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:53.392103    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:53.392524    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:53.395936    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:54.396522    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:54.396914    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:54.399312    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:55.399853    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:55.399853    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:55.404011    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1212 20:00:55.404054    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:00:55.404190    8792 type.go:168] "Request Body" body=""
	I1212 20:00:55.404190    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:55.406466    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:56.406717    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:56.406717    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:56.409652    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:57.409829    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:57.409829    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:57.413808    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:58.414272    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:58.414272    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:58.416891    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:59.418094    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:59.418094    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:59.422379    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:01:00.422928    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:00.423211    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:00.425511    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:01.426949    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:01.427372    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:01.429940    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:02.430697    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:02.430894    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:02.434142    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:03.434554    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:03.434554    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:03.438125    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:04.438646    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:04.438646    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:04.441873    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:05.442580    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:05.443007    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:05.445227    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:01:05.445288    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:01:05.445349    8792 type.go:168] "Request Body" body=""
	I1212 20:01:05.445349    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:05.447160    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1212 20:01:06.448042    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:06.448299    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:06.451364    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:07.451519    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:07.451519    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:07.454072    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:08.455225    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:08.455581    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:08.458949    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:09.459239    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:09.459483    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:09.462124    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:10.462488    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:10.462488    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:10.465073    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:11.466146    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:11.466334    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:11.468858    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:12.469556    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:12.469556    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:12.472263    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:13.473070    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:13.473070    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:13.476554    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:14.476996    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:14.477386    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:14.479751    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:15.480652    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:15.480652    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:15.484243    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:01:15.484268    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:01:15.484379    8792 type.go:168] "Request Body" body=""
	I1212 20:01:15.484379    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:15.486997    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:16.487837    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:16.487837    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:16.491073    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:17.491865    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:17.492218    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:17.495307    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:18.495909    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:18.495909    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:18.499046    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:19.499542    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:19.499542    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:19.502844    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:20.503664    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:20.503664    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:20.506838    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:21.507123    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:21.507496    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:21.510126    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:22.510522    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:22.510522    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:22.513442    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:23.514259    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:23.514259    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:23.516261    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:24.517279    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:24.517279    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:24.520541    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:25.521455    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:25.521455    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:25.524551    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:01:25.524625    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:01:25.524657    8792 type.go:168] "Request Body" body=""
	I1212 20:01:25.524657    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:25.527752    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:26.528360    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:26.528723    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:26.532917    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:01:27.533242    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:27.533242    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:27.537366    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:01:28.538106    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:28.538495    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:28.543549    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1212 20:01:29.544680    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:29.544680    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:29.548232    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:30.548450    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:30.548850    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:30.552101    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:31.552352    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:31.552352    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:31.556248    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:32.556689    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:32.556689    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:32.560889    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:01:33.561227    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:33.561227    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:33.565100    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:34.566919    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:34.566919    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:34.573248    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=6
	I1212 20:01:35.574024    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:35.574411    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:35.577335    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:01:35.577335    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:01:35.577335    8792 type.go:168] "Request Body" body=""
	I1212 20:01:35.577335    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:35.579846    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:36.580067    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:36.580067    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:36.582937    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:37.583614    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:37.584133    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:37.588041    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:38.588334    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:38.588334    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:38.590836    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:39.591771    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:39.592199    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:39.596300    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:40.596570    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:40.596570    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:40.599738    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:41.600585    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:41.600964    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:41.603618    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:42.604326    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:42.604326    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:42.607888    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:43.608118    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:43.608432    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:43.611303    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:44.612148    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:44.612148    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:44.615841    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:45.616729    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:45.616729    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:45.619383    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:01:45.619383    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:01:45.619913    8792 type.go:168] "Request Body" body=""
	I1212 20:01:45.619962    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:45.624234    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:01:46.624440    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:46.624440    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:46.631606    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=7
	I1212 20:01:47.631772    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:47.631772    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:47.634254    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:48.635335    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:48.635335    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:48.638393    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:49.638538    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:49.638538    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:49.642244    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:50.643486    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:50.643486    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:50.646864    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:51.647407    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:51.648062    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:51.651297    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:52.652310    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:52.652310    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:52.656003    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:53.657050    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:53.657050    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:53.660358    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:54.661093    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:54.661093    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:54.664217    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:55.665772    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:55.665772    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:55.669789    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1212 20:01:55.669789    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:01:55.669789    8792 type.go:168] "Request Body" body=""
	I1212 20:01:55.669789    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:55.672845    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:56.673184    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:56.673578    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:56.676091    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:57.677260    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:57.677260    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:57.680492    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:58.680999    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:58.681801    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:58.684437    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:59.685343    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:59.685343    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:59.688492    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:00.689226    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:00.689226    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:00.692407    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:01.693054    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:01.693054    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:01.696414    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:02.696707    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:02.696707    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:02.700656    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:03.701360    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:03.701764    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:03.704532    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:04.705055    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:04.705395    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:04.709582    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:02:05.709819    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:05.709819    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:05.712925    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:02:05.712925    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:02:05.712925    8792 type.go:168] "Request Body" body=""
	I1212 20:02:05.712925    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:05.714981    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:06.715647    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:06.715989    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:06.718856    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:07.719549    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:07.719950    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:07.723017    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:08.723622    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:08.723991    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:08.726824    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:09.727519    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:09.727519    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:09.731398    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:10.731940    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:10.732255    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:10.735314    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:11.736266    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:11.736266    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:11.739684    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:12.740926    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:12.741346    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:12.744101    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:13.745071    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:13.745071    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:13.749298    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:02:14.749764    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:14.749764    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:14.753277    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:15.753345    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:15.753345    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:15.755998    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:02:15.756520    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:02:15.756618    8792 type.go:168] "Request Body" body=""
	I1212 20:02:15.756676    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:15.758786    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:16.759785    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:16.759785    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:16.763359    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:17.763591    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:17.763591    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:17.767014    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:18.767248    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:18.767248    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:18.770795    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:19.770962    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:19.770962    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:19.773337    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:20.774557    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:20.774557    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:20.777421    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:21.778527    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:21.778968    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:21.782312    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:22.783001    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:22.783358    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:22.785874    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:23.786668    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:23.786668    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:23.789637    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:24.790000    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:24.790000    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:24.793439    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:25.793897    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:25.793897    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:25.797842    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:02:25.797972    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:02:25.797972    8792 type.go:168] "Request Body" body=""
	I1212 20:02:25.797972    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:25.800999    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:26.801297    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:26.801297    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:26.804559    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:27.805028    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:27.805383    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:27.808770    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:28.809311    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:28.809864    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:28.812697    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:29.812980    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:29.812980    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:29.816569    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:30.816822    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:30.816822    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:30.819812    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:31.820344    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:31.820344    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:31.824040    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:32.825223    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:32.825223    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:32.828636    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:33.828922    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:33.828922    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:33.833012    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:02:34.834105    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:34.834781    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:34.837739    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:35.838239    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:35.839054    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:35.842296    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:02:35.842377    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:02:35.842447    8792 type.go:168] "Request Body" body=""
	I1212 20:02:35.842525    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:35.845253    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:36.845542    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:36.845878    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:36.849197    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:37.849575    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:37.849575    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:37.852774    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:38.853254    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:38.853925    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:38.857020    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:39.857636    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:39.857636    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:39.861466    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:40.861880    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:40.862546    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:40.865734    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:41.866931    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:41.866931    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:41.870407    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:42.871284    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:42.871284    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:42.875909    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:02:43.876145    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:43.876145    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:43.879252    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:44.879595    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:44.879595    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:44.882581    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:45.882793    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:45.882793    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:45.886772    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:02:45.886823    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:02:45.886823    8792 type.go:168] "Request Body" body=""
	I1212 20:02:45.886823    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:45.889488    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:46.889817    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:46.889817    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:46.892533    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:47.893171    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:47.893605    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:47.897327    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:48.898243    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:48.898243    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:48.901190    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:49.901751    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:49.902239    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:49.905447    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:50.905509    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:50.905509    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:50.908968    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:51.909246    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:51.909595    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:51.913571    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:52.914178    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:52.914178    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:52.917630    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:53.918264    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:53.918264    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:53.921578    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:54.921843    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:54.921843    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:54.925388    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:55.925667    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:55.925667    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:55.929367    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:02:55.929367    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:02:55.929367    8792 type.go:168] "Request Body" body=""
	I1212 20:02:55.929367    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:55.932191    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:56.932533    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:56.932533    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:56.936530    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:57.937538    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:57.937902    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:57.940876    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:58.941300    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:58.941300    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:58.944722    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:59.945325    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:59.945325    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:59.948320    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:00.948833    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:00.948833    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:00.952416    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:01.953225    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:01.953225    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:01.956654    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:02.956910    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:02.956910    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:02.959952    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:03.960484    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:03.961032    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:03.963951    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:04.965244    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:04.965633    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:04.968258    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:05.968774    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:05.968774    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:05.971651    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:03:05.971651    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:03:05.971651    8792 type.go:168] "Request Body" body=""
	I1212 20:03:05.971651    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:05.974027    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:06.974449    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:06.974741    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:06.977205    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:07.977634    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:07.977798    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:07.981006    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:08.982134    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:08.982134    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:08.985063    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:09.985961    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:09.985961    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:09.988609    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:10.988755    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:10.988755    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:10.991472    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:11.992370    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:11.992370    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:11.996488    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:03:12.996868    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:12.997258    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:13.000762    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:14.001059    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:14.001059    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:14.004368    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:15.004777    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:15.004777    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:15.007757    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:16.008339    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:16.008625    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:16.011236    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:03:16.011236    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:03:16.011236    8792 type.go:168] "Request Body" body=""
	I1212 20:03:16.011236    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:16.013832    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:17.014609    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:17.014609    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:17.018477    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:18.018689    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:18.018689    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:18.022881    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:03:19.023377    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:19.023377    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:19.027571    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:03:20.028073    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:20.028073    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:20.031057    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:21.031744    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:21.032211    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:21.035492    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:22.036462    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:22.036462    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:22.038986    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:23.039813    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:23.040216    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:23.042835    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:24.043623    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:24.043623    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:24.047746    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:03:25.048465    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:25.048465    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:25.051125    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:26.051732    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:26.051732    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:26.055363    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:03:26.055363    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:03:26.055363    8792 type.go:168] "Request Body" body=""
	I1212 20:03:26.055363    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:26.058940    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:27.059108    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:27.059476    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:27.062503    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:28.062870    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:28.062870    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:28.066764    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:29.067215    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:29.067215    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:29.069923    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:30.070845    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:30.070845    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:30.073412    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:31.074536    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:31.074979    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:31.077758    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:32.078060    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:32.078060    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:32.082117    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:03:33.083505    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:33.083505    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:33.086255    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:34.087642    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:34.087642    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:34.090378    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:03:34.543368    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1212 20:03:34.543799    8792 node_ready.go:38] duration metric: took 6m0.000497s for node "functional-468800" to be "Ready" ...
	I1212 20:03:34.547199    8792 out.go:203] 
	W1212 20:03:34.550016    8792 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 20:03:34.550016    8792 out.go:285] * 
	W1212 20:03:34.552052    8792 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:03:34.555048    8792 out.go:203] 
	
	
	==> Docker <==
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.644022398Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.644029098Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.644048100Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.644083703Z" level=info msg="Initializing buildkit"
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.744677695Z" level=info msg="Completed buildkit initialization"
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.750002934Z" level=info msg="Daemon has completed initialization"
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.750231253Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.750252555Z" level=info msg="API listen on /run/docker.sock"
	Dec 12 19:57:30 functional-468800 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.750265456Z" level=info msg="API listen on [::]:2376"
	Dec 12 19:57:30 functional-468800 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 19:57:30 functional-468800 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 12 19:57:30 functional-468800 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 12 19:57:31 functional-468800 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Start docker client with request timeout 0s"
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Loaded network plugin cni"
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 12 19:57:31 functional-468800 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:04:30.694828   18380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:04:30.696058   18380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:04:30.698317   18380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:04:30.700820   18380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:04:30.701720   18380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000814] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000769] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000773] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000764] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000772] FS:  0000000000000000 GS:  0000000000000000
	[Dec12 19:57] CPU: 0 PID: 53838 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000857] RIP: 0033:0x7ff47e100b20
	[  +0.000391] Code: Unable to access opcode bytes at RIP 0x7ff47e100af6.
	[  +0.000659] RSP: 002b:00007ffe8b002070 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000797] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000766] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001155] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001186] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001227] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001126] FS:  0000000000000000 GS:  0000000000000000
	[  +0.862009] CPU: 6 PID: 53976 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000896] RIP: 0033:0x7f0cd9433b20
	[  +0.000429] Code: Unable to access opcode bytes at RIP 0x7f0cd9433af6.
	[  +0.000694] RSP: 002b:00007fff41d09ce0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000820] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000759] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000963] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000962] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001270] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000859] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 20:04:30 up  1:06,  0 user,  load average: 0.25, 0.30, 0.57
	Linux functional-468800 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 20:04:27 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:04:27 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 888.
	Dec 12 20:04:27 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:04:27 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:04:28 functional-468800 kubelet[18225]: E1212 20:04:28.015946   18225 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:04:28 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:04:28 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:04:28 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 889.
	Dec 12 20:04:28 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:04:28 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:04:28 functional-468800 kubelet[18237]: E1212 20:04:28.779085   18237 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:04:28 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:04:28 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:04:29 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 890.
	Dec 12 20:04:29 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:04:29 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:04:29 functional-468800 kubelet[18251]: E1212 20:04:29.523233   18251 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:04:29 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:04:29 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:04:30 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 891.
	Dec 12 20:04:30 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:04:30 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:04:30 functional-468800 kubelet[18277]: E1212 20:04:30.215597   18277 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:04:30 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:04:30 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-468800 -n functional-468800
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-468800 -n functional-468800: exit status 2 (589.9491ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-468800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (53.55s)
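The kubelet crash loop in the log above ("kubelet is configured to not run on a host using cgroup v1") indicates the WSL2 host is still exposing a cgroup v1 hierarchy, which this kubelet build refuses to start on. As a minimal diagnostic sketch (not part of the test harness), the cgroup version a host exposes can be inferred from the filesystem type mounted at `/sys/fs/cgroup`:

```shell
#!/bin/sh
# Report which cgroup version the host exposes, based on the filesystem
# type mounted at /sys/fs/cgroup:
#   cgroup2fs -> unified cgroup v2 hierarchy
#   tmpfs     -> legacy v1 (or hybrid) layout, per-controller mounts below it
fstype=$(stat -fc %T /sys/fs/cgroup/ 2>/dev/null || echo unknown)
case "$fstype" in
  cgroup2fs) echo "cgroup v2" ;;
  tmpfs)     echo "cgroup v1" ;;
  *)         echo "unknown ($fstype)" ;;
esac
```

On WSL2 specifically, the kernel can reportedly be switched to a pure v2 layout by adding `kernelCommandLine = cgroup_no_v1=all` to `%USERPROFILE%\.wslconfig` and restarting WSL; that is a general WSL2 configuration note, not something this log confirms about the CI host.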

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (53.76s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 kubectl -- --context functional-468800 get pods
E1212 20:04:54.771182   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:731: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-468800 kubectl -- --context functional-468800 get pods: exit status 1 (50.5920604s)

** stderr ** 
	E1212 20:05:01.542082    8104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:05:11.629994    8104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:05:21.671758    8104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:05:31.712246    8104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:05:41.752733    8104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-windows-amd64.exe -p functional-468800 kubectl -- --context functional-468800 get pods": exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-468800
helpers_test.go:244: (dbg) docker inspect functional-468800:

-- stdout --
	[
	    {
	        "Id": "0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356",
	        "Created": "2025-12-12T19:49:05.216637357Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42493,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T19:49:05.485304524Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/hosts",
	        "LogPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356-json.log",
	        "Name": "/functional-468800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-468800:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-468800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06-init/diff:/var/lib/docker/overlay2/98a48e148b465d6f3cfef773b7defe35ccd4e6e0eb3c49d8b329f27b5b52fa09/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-468800",
	                "Source": "/var/lib/docker/volumes/functional-468800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-468800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-468800",
	                "name.minikube.sigs.k8s.io": "functional-468800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1c576a1bc459e3ecef05356a56ff79fb60b3540cf362d7af9e4f9ccd4a4dded4",
	            "SandboxKey": "/var/run/docker/netns/1c576a1bc459",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55779"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55780"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55781"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55777"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55778"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-468800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a0db2e63885ea6d1e4cf32e8285a0f1e1fe10bae67ef9ab957c6270bdb8c136b",
	                    "EndpointID": "d5e453160824066e5fee477ceb2ca5411fd18d149268eed593084ba8a0cf13dd",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-468800",
	                        "0d3494c9d93e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-468800 -n functional-468800
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-468800 -n functional-468800: exit status 2 (592.1666ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-468800 logs -n 25: (1.1636414s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                          ARGS                                                           │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-461000 image build -t localhost/my-image:functional-461000 testdata\build --alsologtostderr                  │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ image   │ functional-461000 image ls                                                                                              │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ image   │ functional-461000 image ls --format json --alsologtostderr                                                              │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ image   │ functional-461000 image ls --format table --alsologtostderr                                                             │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ image   │ functional-461000 image ls --format short --alsologtostderr                                                             │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ service │ functional-461000 service hello-node --url --format={{.IP}}                                                             │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │                     │
	│ service │ functional-461000 service hello-node --url                                                                              │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:44 UTC │                     │
	│ delete  │ -p functional-461000                                                                                                    │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:48 UTC │ 12 Dec 25 19:48 UTC │
	│ start   │ -p functional-468800 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:48 UTC │                     │
	│ start   │ -p functional-468800 --alsologtostderr -v=8                                                                             │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:57 UTC │                     │
	│ cache   │ functional-468800 cache add registry.k8s.io/pause:3.1                                                                   │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ cache   │ functional-468800 cache add registry.k8s.io/pause:3.3                                                                   │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ cache   │ functional-468800 cache add registry.k8s.io/pause:latest                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ cache   │ functional-468800 cache add minikube-local-cache-test:functional-468800                                                 │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ cache   │ functional-468800 cache delete minikube-local-cache-test:functional-468800                                              │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ cache   │ list                                                                                                                    │ minikube          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ ssh     │ functional-468800 ssh sudo crictl images                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ ssh     │ functional-468800 ssh sudo docker rmi registry.k8s.io/pause:latest                                                      │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ ssh     │ functional-468800 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │                     │
	│ cache   │ functional-468800 cache reload                                                                                          │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ ssh     │ functional-468800 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                     │ minikube          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ kubectl │ functional-468800 kubectl -- --context functional-468800 get pods                                                       │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 19:57:24
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 19:57:24.956785    8792 out.go:360] Setting OutFile to fd 1808 ...
	I1212 19:57:24.998786    8792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:57:24.998786    8792 out.go:374] Setting ErrFile to fd 1700...
	I1212 19:57:24.998786    8792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:57:25.011786    8792 out.go:368] Setting JSON to false
	I1212 19:57:25.013782    8792 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3583,"bootTime":1765565861,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 19:57:25.013782    8792 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 19:57:25.016780    8792 out.go:179] * [functional-468800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 19:57:25.020780    8792 notify.go:221] Checking for updates...
	I1212 19:57:25.022780    8792 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 19:57:25.024782    8792 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 19:57:25.027780    8792 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 19:57:25.030779    8792 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 19:57:25.034782    8792 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:57:25.037790    8792 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 19:57:25.037790    8792 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:57:25.155476    8792 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 19:57:25.159985    8792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:57:25.387868    8792 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 19:57:25.372369133 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 19:57:25.391884    8792 out.go:179] * Using the docker driver based on existing profile
	I1212 19:57:25.396868    8792 start.go:309] selected driver: docker
	I1212 19:57:25.396868    8792 start.go:927] validating driver "docker" against &{Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:57:25.396868    8792 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 19:57:25.402871    8792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:57:25.622678    8792 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 19:57:25.606400505 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 19:57:25.701623    8792 cni.go:84] Creating CNI manager for ""
	I1212 19:57:25.701623    8792 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 19:57:25.701623    8792 start.go:353] cluster config:
	{Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stat
icIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:57:25.706631    8792 out.go:179] * Starting "functional-468800" primary control-plane node in "functional-468800" cluster
	I1212 19:57:25.708636    8792 cache.go:134] Beginning downloading kic base image for docker with docker
	I1212 19:57:25.711883    8792 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 19:57:25.714043    8792 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 19:57:25.714043    8792 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 19:57:25.714043    8792 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1212 19:57:25.714043    8792 cache.go:65] Caching tarball of preloaded images
	I1212 19:57:25.714043    8792 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 19:57:25.714043    8792 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1212 19:57:25.714043    8792 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\config.json ...
	I1212 19:57:25.792275    8792 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 19:57:25.792275    8792 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 19:57:25.792275    8792 cache.go:243] Successfully downloaded all kic artifacts
	I1212 19:57:25.792275    8792 start.go:360] acquireMachinesLock for functional-468800: {Name:mk7e7177bdfcb5a7969561474f8bb14fa15c1eab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 19:57:25.792275    8792 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-468800"
	I1212 19:57:25.792275    8792 start.go:96] Skipping create...Using existing machine configuration
	I1212 19:57:25.792275    8792 fix.go:54] fixHost starting: 
	I1212 19:57:25.799955    8792 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
	I1212 19:57:25.853025    8792 fix.go:112] recreateIfNeeded on functional-468800: state=Running err=<nil>
	W1212 19:57:25.853025    8792 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 19:57:25.856025    8792 out.go:252] * Updating the running docker "functional-468800" container ...
	I1212 19:57:25.856025    8792 machine.go:94] provisionDockerMachine start ...
	I1212 19:57:25.859025    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:25.918375    8792 main.go:143] libmachine: Using SSH client type: native
	I1212 19:57:25.918479    8792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:57:25.918479    8792 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 19:57:26.103358    8792 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-468800
	
	I1212 19:57:26.103411    8792 ubuntu.go:182] provisioning hostname "functional-468800"
	I1212 19:57:26.107534    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:26.162431    8792 main.go:143] libmachine: Using SSH client type: native
	I1212 19:57:26.162900    8792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:57:26.163030    8792 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-468800 && echo "functional-468800" | sudo tee /etc/hostname
	I1212 19:57:26.366993    8792 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-468800
	
	I1212 19:57:26.370927    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:26.421027    8792 main.go:143] libmachine: Using SSH client type: native
	I1212 19:57:26.422025    8792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:57:26.422025    8792 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-468800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-468800/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-468800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 19:57:26.592472    8792 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 19:57:26.592472    8792 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1212 19:57:26.592472    8792 ubuntu.go:190] setting up certificates
	I1212 19:57:26.592472    8792 provision.go:84] configureAuth start
	I1212 19:57:26.596494    8792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-468800
	I1212 19:57:26.648327    8792 provision.go:143] copyHostCerts
	I1212 19:57:26.648492    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1212 19:57:26.648569    8792 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1212 19:57:26.648569    8792 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1212 19:57:26.648569    8792 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 19:57:26.649807    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1212 19:57:26.649946    8792 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1212 19:57:26.649946    8792 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1212 19:57:26.649946    8792 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 19:57:26.650879    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1212 19:57:26.650879    8792 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1212 19:57:26.650879    8792 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1212 19:57:26.650879    8792 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1212 19:57:26.651440    8792 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-468800 san=[127.0.0.1 192.168.49.2 functional-468800 localhost minikube]
	I1212 19:57:26.782013    8792 provision.go:177] copyRemoteCerts
	I1212 19:57:26.785479    8792 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 19:57:26.788240    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:26.842524    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:26.968619    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1212 19:57:26.968964    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 19:57:26.995759    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1212 19:57:26.995759    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 19:57:27.024847    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1212 19:57:27.024847    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 19:57:27.057221    8792 provision.go:87] duration metric: took 464.7444ms to configureAuth
	I1212 19:57:27.057221    8792 ubuntu.go:206] setting minikube options for container-runtime
	I1212 19:57:27.057221    8792 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 19:57:27.061251    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:27.121889    8792 main.go:143] libmachine: Using SSH client type: native
	I1212 19:57:27.122548    8792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:57:27.122604    8792 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 19:57:27.313910    8792 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1212 19:57:27.313910    8792 ubuntu.go:71] root file system type: overlay
	I1212 19:57:27.313910    8792 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 19:57:27.317488    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:27.376486    8792 main.go:143] libmachine: Using SSH client type: native
	I1212 19:57:27.377052    8792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:57:27.377052    8792 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 19:57:27.577536    8792 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 19:57:27.581688    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:27.635455    8792 main.go:143] libmachine: Using SSH client type: native
	I1212 19:57:27.635931    8792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:57:27.635954    8792 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 19:57:27.828516    8792 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 19:57:27.828574    8792 machine.go:97] duration metric: took 1.9725293s to provisionDockerMachine
	I1212 19:57:27.828619    8792 start.go:293] postStartSetup for "functional-468800" (driver="docker")
	I1212 19:57:27.828619    8792 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 19:57:27.833127    8792 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 19:57:27.836440    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:27.891552    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:28.022421    8792 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 19:57:28.031829    8792 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1212 19:57:28.031829    8792 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1212 19:57:28.031829    8792 command_runner.go:130] > VERSION_ID="12"
	I1212 19:57:28.031829    8792 command_runner.go:130] > VERSION="12 (bookworm)"
	I1212 19:57:28.031829    8792 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1212 19:57:28.031829    8792 command_runner.go:130] > ID=debian
	I1212 19:57:28.031829    8792 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1212 19:57:28.031829    8792 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1212 19:57:28.031829    8792 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1212 19:57:28.031829    8792 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 19:57:28.031829    8792 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 19:57:28.031829    8792 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1212 19:57:28.032546    8792 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1212 19:57:28.033148    8792 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> 133962.pem in /etc/ssl/certs
	I1212 19:57:28.033204    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> /etc/ssl/certs/133962.pem
	I1212 19:57:28.033277    8792 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\13396\hosts -> hosts in /etc/test/nested/copy/13396
	I1212 19:57:28.033277    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\13396\hosts -> /etc/test/nested/copy/13396/hosts
	I1212 19:57:28.037935    8792 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/13396
	I1212 19:57:28.050821    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /etc/ssl/certs/133962.pem (1708 bytes)
	I1212 19:57:28.081156    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\13396\hosts --> /etc/test/nested/copy/13396/hosts (40 bytes)
	I1212 19:57:28.109846    8792 start.go:296] duration metric: took 281.2243ms for postStartSetup
	I1212 19:57:28.115818    8792 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 19:57:28.118674    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:28.171853    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:28.302700    8792 command_runner.go:130] > 1%
	I1212 19:57:28.308193    8792 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 19:57:28.316146    8792 command_runner.go:130] > 950G
	I1212 19:57:28.316204    8792 fix.go:56] duration metric: took 2.5239035s for fixHost
	I1212 19:57:28.316204    8792 start.go:83] releasing machines lock for "functional-468800", held for 2.5239035s
	I1212 19:57:28.320187    8792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-468800
	I1212 19:57:28.373764    8792 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1212 19:57:28.378728    8792 ssh_runner.go:195] Run: cat /version.json
	I1212 19:57:28.378728    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:28.382043    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:28.432252    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:28.433503    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:28.550849    8792 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W1212 19:57:28.550961    8792 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1212 19:57:28.550961    8792 command_runner.go:130] > {"iso_version": "v1.37.0-1765481609-22101", "kicbase_version": "v0.0.48-1765505794-22112", "minikube_version": "v1.37.0", "commit": "2e51b54b5cee5d454381ac23cfe3d8d395879671"}
	I1212 19:57:28.556187    8792 ssh_runner.go:195] Run: systemctl --version
	I1212 19:57:28.565686    8792 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1212 19:57:28.565686    8792 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1212 19:57:28.570074    8792 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 19:57:28.577782    8792 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 19:57:28.578775    8792 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 19:57:28.583114    8792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 19:57:28.595283    8792 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 19:57:28.595283    8792 start.go:496] detecting cgroup driver to use...
	I1212 19:57:28.595283    8792 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 19:57:28.595283    8792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 19:57:28.617880    8792 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 19:57:28.622700    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 19:57:28.640953    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 19:57:28.655059    8792 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 19:57:28.659503    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1212 19:57:28.659726    8792 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1212 19:57:28.659726    8792 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1212 19:57:28.678759    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 19:57:28.696413    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 19:57:28.715842    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 19:57:28.736528    8792 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 19:57:28.755951    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 19:57:28.776240    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 19:57:28.795721    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
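The run of `sed` edits above (pin the pause image, force `SystemdCgroup = false`, re-insert `enable_unprivileged_ports` under the CRI plugin table) can be replayed against a scratch copy of the config. This is a sketch assuming GNU sed, operating on a made-up minimal `config.toml` rather than the real `/etc/containerd/config.toml`:

```shell
#!/bin/sh
# Replay the containerd config edits from the log on a scratch copy
# (no sudo needed; sample TOML content is invented for illustration).
set -eu
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF

# Pin the pause image and force the cgroupfs driver (SystemdCgroup = false).
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"

# Drop any stale setting, then re-insert it right under the CRI plugin table.
sed -i '/^ *enable_unprivileged_ports = .*/d' "$cfg"
sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' "$cfg"

cat "$cfg"
```

After these edits the scratch file carries the same three settings minikube writes before `systemctl restart containerd`.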
	I1212 19:57:28.815051    8792 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 19:57:28.829778    8792 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 19:57:28.834204    8792 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 19:57:28.852899    8792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:57:28.995620    8792 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 19:57:29.167559    8792 start.go:496] detecting cgroup driver to use...
	I1212 19:57:29.167559    8792 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 19:57:29.172911    8792 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 19:57:29.191693    8792 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1212 19:57:29.191693    8792 command_runner.go:130] > [Unit]
	I1212 19:57:29.191693    8792 command_runner.go:130] > Description=Docker Application Container Engine
	I1212 19:57:29.191693    8792 command_runner.go:130] > Documentation=https://docs.docker.com
	I1212 19:57:29.191693    8792 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1212 19:57:29.191693    8792 command_runner.go:130] > Wants=network-online.target containerd.service
	I1212 19:57:29.191693    8792 command_runner.go:130] > Requires=docker.socket
	I1212 19:57:29.191693    8792 command_runner.go:130] > StartLimitBurst=3
	I1212 19:57:29.191693    8792 command_runner.go:130] > StartLimitIntervalSec=60
	I1212 19:57:29.191693    8792 command_runner.go:130] > [Service]
	I1212 19:57:29.191693    8792 command_runner.go:130] > Type=notify
	I1212 19:57:29.191693    8792 command_runner.go:130] > Restart=always
	I1212 19:57:29.191693    8792 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1212 19:57:29.191693    8792 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1212 19:57:29.191693    8792 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1212 19:57:29.191693    8792 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1212 19:57:29.191693    8792 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1212 19:57:29.191693    8792 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1212 19:57:29.191693    8792 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1212 19:57:29.191693    8792 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1212 19:57:29.191693    8792 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1212 19:57:29.191693    8792 command_runner.go:130] > ExecStart=
	I1212 19:57:29.191693    8792 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1212 19:57:29.191693    8792 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1212 19:57:29.191693    8792 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1212 19:57:29.191693    8792 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1212 19:57:29.191693    8792 command_runner.go:130] > LimitNOFILE=infinity
	I1212 19:57:29.191693    8792 command_runner.go:130] > LimitNPROC=infinity
	I1212 19:57:29.191693    8792 command_runner.go:130] > LimitCORE=infinity
	I1212 19:57:29.191693    8792 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1212 19:57:29.191693    8792 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1212 19:57:29.191693    8792 command_runner.go:130] > TasksMax=infinity
	I1212 19:57:29.191693    8792 command_runner.go:130] > TimeoutStartSec=0
	I1212 19:57:29.191693    8792 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1212 19:57:29.191693    8792 command_runner.go:130] > Delegate=yes
	I1212 19:57:29.191693    8792 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1212 19:57:29.191693    8792 command_runner.go:130] > KillMode=process
	I1212 19:57:29.191693    8792 command_runner.go:130] > OOMScoreAdjust=-500
	I1212 19:57:29.191693    8792 command_runner.go:130] > [Install]
	I1212 19:57:29.191693    8792 command_runner.go:130] > WantedBy=multi-user.target
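The paired `ExecStart=` lines in the unit dump above are the standard systemd idiom the file's own comments describe: an empty `ExecStart=` in a drop-in clears the command inherited from the base unit, so the following `ExecStart=` replaces it rather than being appended (systemd rejects two commands for anything but `Type=oneshot`). A minimal drop-in using the same pattern — path and flags here are hypothetical, not minikube's:

```ini
# /etc/systemd/system/docker.service.d/override.conf (hypothetical)
[Service]
# Clear the inherited command first; without this, a second ExecStart=
# fails with "more than one ExecStart= setting, which is only allowed
# for Type=oneshot services."
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
```

A drop-in like this takes effect after `systemctl daemon-reload` followed by a service restart, which is exactly the sequence the log performs next.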
	I1212 19:57:29.196788    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 19:57:29.221924    8792 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 19:57:29.312337    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 19:57:29.337554    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 19:57:29.357559    8792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 19:57:29.379522    8792 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1212 19:57:29.384213    8792 ssh_runner.go:195] Run: which cri-dockerd
	I1212 19:57:29.390808    8792 command_runner.go:130] > /usr/bin/cri-dockerd
	I1212 19:57:29.396438    8792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 19:57:29.409074    8792 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1212 19:57:29.434191    8792 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 19:57:29.578871    8792 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 19:57:29.719341    8792 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 19:57:29.719341    8792 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 19:57:29.746173    8792 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1212 19:57:29.768870    8792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:57:29.905737    8792 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 19:57:30.757640    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 19:57:30.780953    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1212 19:57:30.802218    8792 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1212 19:57:30.829184    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 19:57:30.853409    8792 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 19:57:30.994012    8792 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 19:57:31.134627    8792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:57:31.283484    8792 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 19:57:31.309618    8792 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1212 19:57:31.333897    8792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:57:31.475108    8792 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1212 19:57:31.578219    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 19:57:31.597007    8792 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 19:57:31.600988    8792 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 19:57:31.610316    8792 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1212 19:57:31.611281    8792 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 19:57:31.611281    8792 command_runner.go:130] > Device: 0,112	Inode: 1755        Links: 1
	I1212 19:57:31.611281    8792 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1212 19:57:31.611281    8792 command_runner.go:130] > Access: 2025-12-12 19:57:31.474638770 +0000
	I1212 19:57:31.611281    8792 command_runner.go:130] > Modify: 2025-12-12 19:57:31.474638770 +0000
	I1212 19:57:31.611281    8792 command_runner.go:130] > Change: 2025-12-12 19:57:31.484639595 +0000
	I1212 19:57:31.611281    8792 command_runner.go:130] >  Birth: -
	I1212 19:57:31.611281    8792 start.go:564] Will wait 60s for crictl version
	I1212 19:57:31.615844    8792 ssh_runner.go:195] Run: which crictl
	I1212 19:57:31.621876    8792 command_runner.go:130] > /usr/local/bin/crictl
	I1212 19:57:31.626999    8792 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 19:57:31.672687    8792 command_runner.go:130] > Version:  0.1.0
	I1212 19:57:31.672762    8792 command_runner.go:130] > RuntimeName:  docker
	I1212 19:57:31.672762    8792 command_runner.go:130] > RuntimeVersion:  29.1.2
	I1212 19:57:31.672762    8792 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 19:57:31.672790    8792 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1212 19:57:31.676132    8792 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 19:57:31.713311    8792 command_runner.go:130] > 29.1.2
	I1212 19:57:31.716489    8792 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 19:57:31.755737    8792 command_runner.go:130] > 29.1.2
	I1212 19:57:31.761482    8792 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1212 19:57:31.765357    8792 cli_runner.go:164] Run: docker exec -t functional-468800 dig +short host.docker.internal
	I1212 19:57:31.901903    8792 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1212 19:57:31.906530    8792 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1212 19:57:31.913687    8792 command_runner.go:130] > 192.168.65.254	host.minikube.internal
	I1212 19:57:31.917320    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:31.973317    8792 kubeadm.go:884] updating cluster {Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custo
mQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 19:57:31.973590    8792 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 19:57:31.977450    8792 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1212 19:57:32.013673    8792 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 19:57:32.013673    8792 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 19:57:32.013673    8792 docker.go:621] Images already preloaded, skipping extraction
	I1212 19:57:32.017349    8792 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1212 19:57:32.047537    8792 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 19:57:32.047537    8792 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 19:57:32.047537    8792 cache_images.go:86] Images are preloaded, skipping loading
	I1212 19:57:32.047537    8792 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1212 19:57:32.048190    8792 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-468800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 19:57:32.051146    8792 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 19:57:32.121447    8792 command_runner.go:130] > cgroupfs
	I1212 19:57:32.121447    8792 cni.go:84] Creating CNI manager for ""
	I1212 19:57:32.121447    8792 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 19:57:32.121447    8792 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 19:57:32.121964    8792 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-468800 NodeName:functional-468800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 19:57:32.122106    8792 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-468800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 19:57:32.126035    8792 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 19:57:32.138764    8792 command_runner.go:130] > kubeadm
	I1212 19:57:32.138798    8792 command_runner.go:130] > kubectl
	I1212 19:57:32.138825    8792 command_runner.go:130] > kubelet
	I1212 19:57:32.138845    8792 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 19:57:32.143533    8792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 19:57:32.155602    8792 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1212 19:57:32.179900    8792 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 19:57:32.199342    8792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1212 19:57:32.222871    8792 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 19:57:32.229151    8792 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1212 19:57:32.234589    8792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:57:32.373967    8792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 19:57:32.974236    8792 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800 for IP: 192.168.49.2
	I1212 19:57:32.974236    8792 certs.go:195] generating shared ca certs ...
	I1212 19:57:32.974236    8792 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:57:32.975214    8792 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1212 19:57:32.975214    8792 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1212 19:57:32.975214    8792 certs.go:257] generating profile certs ...
	I1212 19:57:32.976191    8792 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\client.key
	I1212 19:57:32.976561    8792 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.key.a2fee78d
	I1212 19:57:32.976892    8792 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.key
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 19:57:32.977527    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 19:57:32.977863    8792 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem (1338 bytes)
	W1212 19:57:32.977863    8792 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396_empty.pem, impossibly tiny 0 bytes
	I1212 19:57:32.977863    8792 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1212 19:57:32.978401    8792 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 19:57:32.978646    8792 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 19:57:32.978696    8792 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1212 19:57:32.978696    8792 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem (1708 bytes)
	I1212 19:57:32.979304    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem -> /usr/share/ca-certificates/13396.pem
	I1212 19:57:32.979449    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> /usr/share/ca-certificates/133962.pem
	I1212 19:57:32.979529    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:32.980729    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 19:57:33.008686    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 19:57:33.035660    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 19:57:33.063247    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 19:57:33.108547    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 19:57:33.138500    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 19:57:33.165883    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 19:57:33.195246    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 19:57:33.221022    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem --> /usr/share/ca-certificates/13396.pem (1338 bytes)
	I1212 19:57:33.248791    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /usr/share/ca-certificates/133962.pem (1708 bytes)
	I1212 19:57:33.274438    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 19:57:33.302337    8792 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 19:57:33.324312    8792 ssh_runner.go:195] Run: openssl version
	I1212 19:57:33.335263    8792 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1212 19:57:33.339948    8792 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13396.pem
	I1212 19:57:33.356389    8792 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13396.pem /etc/ssl/certs/13396.pem
	I1212 19:57:33.375441    8792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13396.pem
	I1212 19:57:33.384435    8792 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 19:57:33.384435    8792 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 19:57:33.387660    8792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13396.pem
	I1212 19:57:33.430281    8792 command_runner.go:130] > 51391683
	I1212 19:57:33.435287    8792 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 19:57:33.452481    8792 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/133962.pem
	I1212 19:57:33.471523    8792 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/133962.pem /etc/ssl/certs/133962.pem
	I1212 19:57:33.489874    8792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133962.pem
	I1212 19:57:33.498068    8792 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 19:57:33.498068    8792 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 19:57:33.502698    8792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133962.pem
	I1212 19:57:33.544550    8792 command_runner.go:130] > 3ec20f2e
	I1212 19:57:33.549548    8792 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 19:57:33.566747    8792 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:33.583990    8792 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 19:57:33.600438    8792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:33.607945    8792 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:33.607945    8792 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:33.614484    8792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:33.657826    8792 command_runner.go:130] > b5213941
	I1212 19:57:33.662138    8792 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 19:57:33.678498    8792 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 19:57:33.685111    8792 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 19:57:33.685111    8792 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1212 19:57:33.685111    8792 command_runner.go:130] > Device: 8,48	Inode: 15292       Links: 1
	I1212 19:57:33.685111    8792 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 19:57:33.685797    8792 command_runner.go:130] > Access: 2025-12-12 19:53:20.728281925 +0000
	I1212 19:57:33.685797    8792 command_runner.go:130] > Modify: 2025-12-12 19:49:18.176374111 +0000
	I1212 19:57:33.685797    8792 command_runner.go:130] > Change: 2025-12-12 19:49:18.176374111 +0000
	I1212 19:57:33.685797    8792 command_runner.go:130] >  Birth: 2025-12-12 19:49:18.176374111 +0000
	I1212 19:57:33.689949    8792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 19:57:33.733144    8792 command_runner.go:130] > Certificate will not expire
	I1212 19:57:33.737823    8792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 19:57:33.780151    8792 command_runner.go:130] > Certificate will not expire
	I1212 19:57:33.785054    8792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 19:57:33.827773    8792 command_runner.go:130] > Certificate will not expire
	I1212 19:57:33.833292    8792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 19:57:33.875401    8792 command_runner.go:130] > Certificate will not expire
	I1212 19:57:33.880293    8792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 19:57:33.922924    8792 command_runner.go:130] > Certificate will not expire
	I1212 19:57:33.927940    8792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 19:57:33.970239    8792 command_runner.go:130] > Certificate will not expire
	I1212 19:57:33.970239    8792 kubeadm.go:401] StartCluster: {Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:57:33.976672    8792 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 19:57:34.008252    8792 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 19:57:34.020977    8792 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1212 19:57:34.021018    8792 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1212 19:57:34.021018    8792 command_runner.go:130] > /var/lib/minikube/etcd:
	I1212 19:57:34.021108    8792 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 19:57:34.021108    8792 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 19:57:34.025234    8792 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 19:57:34.045139    8792 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 19:57:34.049590    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:34.107138    8792 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-468800" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 19:57:34.107889    8792 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "functional-468800" cluster setting kubeconfig missing "functional-468800" context setting]
	I1212 19:57:34.107889    8792 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:57:34.126355    8792 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 19:57:34.126843    8792 kapi.go:59] client config for functional-468800: &rest.Config{Host:"https://127.0.0.1:55778", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData
:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff6e1d19080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 19:57:34.128169    8792 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 19:57:34.128230    8792 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 19:57:34.128230    8792 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 19:57:34.128350    8792 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 19:57:34.128350    8792 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 19:57:34.128350    8792 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 19:57:34.132435    8792 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 19:57:34.149951    8792 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1212 19:57:34.150008    8792 kubeadm.go:602] duration metric: took 128.8994ms to restartPrimaryControlPlane
	I1212 19:57:34.150032    8792 kubeadm.go:403] duration metric: took 179.7913ms to StartCluster
	I1212 19:57:34.150032    8792 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:57:34.150032    8792 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 19:57:34.151180    8792 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:57:34.152111    8792 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 19:57:34.152111    8792 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 19:57:34.152386    8792 addons.go:70] Setting storage-provisioner=true in profile "functional-468800"
	I1212 19:57:34.152386    8792 addons.go:70] Setting default-storageclass=true in profile "functional-468800"
	I1212 19:57:34.152426    8792 addons.go:239] Setting addon storage-provisioner=true in "functional-468800"
	I1212 19:57:34.152475    8792 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-468800"
	I1212 19:57:34.152564    8792 host.go:66] Checking if "functional-468800" exists ...
	I1212 19:57:34.152599    8792 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 19:57:34.155555    8792 out.go:179] * Verifying Kubernetes components...
	I1212 19:57:34.161161    8792 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
	I1212 19:57:34.161613    8792 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
	I1212 19:57:34.163072    8792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:57:34.221534    8792 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 19:57:34.221534    8792 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 19:57:34.221534    8792 kapi.go:59] client config for functional-468800: &rest.Config{Host:"https://127.0.0.1:55778", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff6e1d19080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 19:57:34.222943    8792 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 19:57:34.223481    8792 addons.go:239] Setting addon default-storageclass=true in "functional-468800"
	I1212 19:57:34.223558    8792 host.go:66] Checking if "functional-468800" exists ...
	I1212 19:57:34.223558    8792 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:34.223558    8792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 19:57:34.227691    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:34.230256    8792 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
	I1212 19:57:34.287093    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:34.289848    8792 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:34.289848    8792 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 19:57:34.293811    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:34.345554    8792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 19:57:34.348560    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:34.426758    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:34.480013    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:34.480104    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:34.534162    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:34.538400    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.538479    8792 retry.go:31] will retry after 344.600735ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.538532    8792 node_ready.go:35] waiting up to 6m0s for node "functional-468800" to be "Ready" ...
	I1212 19:57:34.539394    8792 type.go:168] "Request Body" body=""
	I1212 19:57:34.539597    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:34.541949    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:34.608531    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:34.613599    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.613599    8792 retry.go:31] will retry after 216.683996ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.835959    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:34.887701    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:34.908576    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:34.913475    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.913475    8792 retry.go:31] will retry after 230.473341ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.961197    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:34.966061    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.966061    8792 retry.go:31] will retry after 349.771822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.150121    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:35.221040    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:35.228247    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.228333    8792 retry.go:31] will retry after 512.778483ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.321063    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:35.394131    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:35.397148    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.397148    8792 retry.go:31] will retry after 487.352123ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.542707    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:35.542707    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:35.545160    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:35.747496    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:35.819613    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:35.822659    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.822659    8792 retry.go:31] will retry after 1.154413243s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.890743    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:35.965246    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:35.972460    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.972460    8792 retry.go:31] will retry after 1.245938436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:36.545730    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:36.545730    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:36.549771    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:57:36.983387    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:37.090901    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:37.094847    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:37.094847    8792 retry.go:31] will retry after 1.548342934s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:37.223991    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:37.295689    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:37.299705    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:37.299769    8792 retry.go:31] will retry after 1.579528606s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:37.551013    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:37.551013    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:37.554154    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:38.554939    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:38.555432    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:38.558234    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:38.649390    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:38.725500    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:38.729499    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:38.729499    8792 retry.go:31] will retry after 2.648471583s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:38.884600    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:38.953302    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:38.958318    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:38.958318    8792 retry.go:31] will retry after 2.058418403s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:39.559077    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:39.559356    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:39.562225    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:40.562954    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:40.563393    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:40.566347    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:41.022091    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:41.102318    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:41.106247    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:41.106247    8792 retry.go:31] will retry after 3.080320353s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:41.384408    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:41.470520    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:41.473795    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:41.473795    8792 retry.go:31] will retry after 2.343057986s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:41.566604    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:41.566604    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:41.569639    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:42.569950    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:42.569950    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:42.573153    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:43.573545    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:43.573545    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:43.577655    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:57:43.821674    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:43.897847    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:43.901846    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:43.901846    8792 retry.go:31] will retry after 5.566518346s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:44.193277    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:44.263403    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:44.269459    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:44.269459    8792 retry.go:31] will retry after 4.550082482s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:44.577835    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:44.577835    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:44.580876    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:57:44.581034    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:57:44.581158    8792 type.go:168] "Request Body" body=""
	I1212 19:57:44.581244    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:44.583508    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:45.583961    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:45.583961    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:45.587161    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:46.587855    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:46.588199    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:46.590728    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:47.591504    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:47.591504    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:47.594168    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:48.595392    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:48.595392    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:48.601208    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1212 19:57:48.824534    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:48.903714    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:48.909283    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:48.909283    8792 retry.go:31] will retry after 5.408295828s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:49.475338    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:49.554836    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:49.559515    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:49.559515    8792 retry.go:31] will retry after 7.920709676s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:49.602224    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:49.602480    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:49.605147    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:50.605575    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:50.605575    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:50.609094    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:51.610210    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:51.610210    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:51.613279    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:52.613438    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:52.613438    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:52.617857    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:57:53.618444    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:53.618444    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:53.622009    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:54.323567    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:54.399774    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:54.402767    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:54.402767    8792 retry.go:31] will retry after 5.650885129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:54.622233    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:54.622233    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:54.625806    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:57:54.625833    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:57:54.625833    8792 type.go:168] "Request Body" body=""
	I1212 19:57:54.625833    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:54.628220    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:55.628567    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:55.628567    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:55.632067    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:56.632335    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:56.632737    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:56.635417    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:57.485659    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:57.566715    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:57.570725    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:57.570725    8792 retry.go:31] will retry after 5.889801353s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:57.635601    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:57.636162    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:57.638437    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:58.639201    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:58.639201    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:58.641202    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:59.642751    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:59.642751    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:59.645820    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:00.059077    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:58:00.141196    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:00.144743    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:00.144828    8792 retry.go:31] will retry after 12.880427161s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:00.646278    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:00.646278    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:00.648514    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:01.648554    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:01.648554    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:01.652477    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:02.652719    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:02.652719    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:02.656865    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:58:03.466574    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:58:03.546687    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:03.552160    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:03.552160    8792 retry.go:31] will retry after 8.684375444s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:03.657068    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:03.657068    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:03.660376    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:04.660836    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:04.661165    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:04.664417    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:58:04.664489    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:58:04.664634    8792 type.go:168] "Request Body" body=""
	I1212 19:58:04.664723    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:04.667029    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:05.667419    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:05.667419    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:05.670032    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:06.670984    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:06.670984    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:06.674354    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:07.675175    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:07.675473    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:07.678161    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:08.679000    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:08.679000    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:08.682498    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:09.683536    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:09.684039    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:09.686703    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:10.687176    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:10.687514    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:10.691708    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:58:11.692097    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:11.692097    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:11.695419    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:12.243184    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:58:12.329214    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:12.335592    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:12.335592    8792 retry.go:31] will retry after 19.078221738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:12.695735    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:12.695735    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:12.698564    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:13.030727    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:58:13.107677    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:13.111475    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:13.111475    8792 retry.go:31] will retry after 24.078034123s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:13.699329    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:13.699329    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:13.703201    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:14.703632    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:14.703632    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:14.706632    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:58:14.706632    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:58:14.706632    8792 type.go:168] "Request Body" body=""
	I1212 19:58:14.706632    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:14.709461    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:15.709987    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:15.709987    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:15.713881    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:16.714426    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:16.714947    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:16.717509    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:17.718027    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:17.718027    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:17.721452    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:18.721719    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:18.722180    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:18.725521    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:19.726174    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:19.726174    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:19.731274    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:58:20.731838    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:20.731838    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:20.735774    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:21.736083    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:21.736083    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:21.739364    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:22.740462    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:22.740462    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:22.743494    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:23.744218    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:23.744882    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:23.747961    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:24.748401    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:24.748401    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:24.752939    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1212 19:58:24.752939    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:58:24.752939    8792 type.go:168] "Request Body" body=""
	I1212 19:58:24.752939    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:24.756295    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:25.756593    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:25.756959    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:25.759330    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:26.760825    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:26.760825    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:26.765414    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:58:27.765653    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:27.765653    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:27.769152    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:28.770176    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:28.770595    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:28.774341    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:29.774498    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:29.774498    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:29.777488    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:30.778437    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:30.778437    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:30.781414    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:31.419403    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:58:31.498102    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:31.502554    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:31.502554    8792 retry.go:31] will retry after 21.655222228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:31.781482    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:31.781482    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:31.783476    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1212 19:58:32.785130    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:32.785130    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:32.787452    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:33.788547    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:33.788547    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:33.791489    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:34.792428    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:34.792428    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:34.794457    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 19:58:34.794457    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:58:34.794457    8792 type.go:168] "Request Body" body=""
	I1212 19:58:34.794457    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:34.796423    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1212 19:58:35.796926    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:35.796926    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:35.800403    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:36.800694    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:36.800694    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:36.803902    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:37.195194    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:58:37.275035    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:37.278655    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:37.278655    8792 retry.go:31] will retry after 33.639329095s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:37.804194    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:37.804194    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:37.807496    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:38.808801    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:38.808801    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:38.811801    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:39.812262    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:39.812262    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:39.815469    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:40.816141    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:40.816141    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:40.819310    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:41.819973    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:41.819973    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:41.823039    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:42.824053    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:42.824053    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:42.827675    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:43.828345    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:43.828345    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:43.830350    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:44.830883    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:44.830883    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:44.834425    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:58:44.834502    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:58:44.834607    8792 type.go:168] "Request Body" body=""
	I1212 19:58:44.834703    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:44.836790    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:45.837202    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:45.837202    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:45.840615    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:46.840700    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:46.840700    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:46.843992    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:47.844334    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:47.844334    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:47.847669    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:48.848509    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:48.848509    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:48.851509    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:49.852471    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:49.852471    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:49.855417    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:50.855889    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:50.855889    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:50.858888    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:51.859324    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:51.859324    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:51.862752    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:52.863764    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:52.863764    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:52.867051    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:53.163493    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:58:53.239799    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:53.245721    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:53.245920    8792 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 19:58:53.867924    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:53.867924    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:53.871211    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:54.872502    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:54.872502    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:54.875103    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 19:58:54.875103    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:58:54.875635    8792 type.go:168] "Request Body" body=""
	I1212 19:58:54.875635    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:54.878074    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:55.878391    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:55.878391    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:55.881700    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:56.882314    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:56.882731    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:56.885332    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:57.886661    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:57.886661    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:57.890321    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:58.891069    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:58.891069    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:58.894045    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:59.894455    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:59.894455    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:59.897144    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:00.897724    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:00.897724    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:00.900925    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:01.901327    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:01.901327    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:01.904820    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:02.905377    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:02.905668    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:02.908844    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:03.909357    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:03.909357    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:03.912567    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:04.913190    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:04.913190    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:04.916248    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:59:04.916248    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:59:04.916248    8792 type.go:168] "Request Body" body=""
	I1212 19:59:04.916248    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:04.918608    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:05.918787    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:05.919084    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:05.921580    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:06.921873    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:06.921873    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:06.925988    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:59:07.927045    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:07.927045    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:07.930359    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:08.930575    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:08.930575    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:08.934014    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:09.935175    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:09.935175    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:09.939760    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:59:10.923536    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:59:10.940298    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:10.940298    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:10.942578    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:11.011286    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:59:11.011286    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:59:11.011286    8792 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 19:59:11.015418    8792 out.go:179] * Enabled addons: 
	I1212 19:59:11.018366    8792 addons.go:530] duration metric: took 1m36.8652549s for enable addons: enabled=[]
	I1212 19:59:11.943695    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:11.943695    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:11.946524    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:12.947004    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:12.947004    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:12.950107    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:13.950403    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:13.950403    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:13.953492    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:14.953762    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:14.953762    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:14.957001    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:59:14.957153    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:59:14.957292    8792 type.go:168] "Request Body" body=""
	I1212 19:59:14.957344    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:14.959399    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:15.959732    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:15.959732    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:15.963481    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:16.964631    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:16.964631    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:16.967431    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:17.968335    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:17.968716    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:17.971422    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:18.975421    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:18.975482    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:18.981353    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1212 19:59:19.982483    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:19.982483    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:19.986458    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:20.986878    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:20.986878    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:20.990580    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:21.991705    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:21.991705    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:21.994313    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:22.994828    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:22.994828    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:22.998384    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:23.999291    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:23.999572    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:24.001757    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:25.002197    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:25.002197    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:25.006076    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:59:25.006076    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:59:25.006076    8792 type.go:168] "Request Body" body=""
	I1212 19:59:25.006076    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:25.008833    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:26.009236    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:26.009483    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:26.013280    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:27.013991    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:27.013991    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:27.017339    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:28.017861    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:28.017861    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:28.020302    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:29.021278    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:29.021278    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:29.024910    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:30.025134    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:30.025134    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:30.028490    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:31.029228    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:31.029228    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:31.032192    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:32.033358    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:32.033358    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:32.037022    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:33.037052    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:33.037052    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:33.039997    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:34.040974    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:34.040974    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:34.044336    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:35.045158    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:35.045158    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:35.050424    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	W1212 19:59:35.050478    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:59:35.050634    8792 type.go:168] "Request Body" body=""
	I1212 19:59:35.050710    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:35.053272    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:36.053659    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:36.053659    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:36.056921    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:37.057862    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:37.057983    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:37.061055    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:38.061705    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:38.061705    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:38.064401    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:39.065070    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:39.065070    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:39.070212    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1212 19:59:40.070745    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:40.070745    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:40.074056    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:41.074238    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:41.074238    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:41.077817    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:42.078786    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:42.078786    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:42.082102    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:43.082439    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:43.082849    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:43.086074    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:44.086257    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:44.086257    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:44.089158    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:45.089746    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:45.089746    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:45.093004    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:59:45.093004    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:59:45.093004    8792 type.go:168] "Request Body" body=""
	I1212 19:59:45.093004    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:45.096683    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:46.097116    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:46.097615    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:46.100214    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:47.101361    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:47.101361    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:47.104657    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:48.104994    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:48.104994    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:48.108049    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:49.109535    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:49.109535    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:49.112664    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:50.113614    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:50.113614    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:50.117411    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:51.117709    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:51.117709    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:51.121291    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:52.121914    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:52.122224    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:52.125068    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:53.125697    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:53.126105    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:53.129084    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:54.129467    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:54.129467    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:54.133149    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:55.133722    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:55.133722    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:55.139098    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	W1212 19:59:55.139630    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:59:55.139774    8792 type.go:168] "Request Body" body=""
	I1212 19:59:55.139830    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:55.142212    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:56.142471    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:56.142471    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:56.145561    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:57.146754    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:57.146754    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:57.150691    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:58.151315    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:58.151315    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:58.153802    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:59.154632    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:59.154632    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:59.157895    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:00.158286    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:00.158286    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:00.161521    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:01.161851    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:01.161851    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:01.165478    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:02.166140    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:02.166140    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:02.169015    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:03.169549    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:03.169549    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:03.179028    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=9
	I1212 20:00:04.179254    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:04.179632    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:04.182303    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:05.183057    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:05.183057    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:05.186169    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:00:05.186202    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:00:05.186368    8792 type.go:168] "Request Body" body=""
	I1212 20:00:05.186427    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:05.188490    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:06.189369    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:06.189369    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:06.191767    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:07.192287    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:07.192287    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:07.195873    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:08.196564    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:08.196564    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:08.200301    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:09.200652    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:09.201050    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:09.203873    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:10.204621    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:10.204621    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:10.207991    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:11.208169    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:11.208695    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:11.211546    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:12.212265    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:12.212265    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:12.215652    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:13.216481    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:13.216481    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:13.218808    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:14.219114    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:14.219114    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:14.222371    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:15.223587    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:15.223882    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:15.226696    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:00:15.226696    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:00:15.226696    8792 type.go:168] "Request Body" body=""
	I1212 20:00:15.227288    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:15.230014    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:16.230255    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:16.230702    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:16.234073    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:17.234537    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:17.234537    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:17.238981    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:00:18.240162    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:18.240450    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:18.242671    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:19.244029    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:19.244029    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:19.247551    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:20.248288    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:20.248689    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:20.251486    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:21.252448    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:21.252448    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:21.255871    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:22.256129    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:22.256129    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:22.259292    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:23.259853    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:23.260152    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:23.263166    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:24.264181    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:24.264523    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:24.267309    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:25.267655    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:25.267655    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:25.270583    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:00:25.270681    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:00:25.270716    8792 type.go:168] "Request Body" body=""
	I1212 20:00:25.270716    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:25.272780    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:26.273236    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:26.273236    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:26.276531    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:27.277612    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:27.277612    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:27.280399    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:28.280976    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:28.281348    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:28.284050    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:29.284889    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:29.284889    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:29.288318    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:30.289605    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:30.289605    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:30.292210    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:31.292623    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:31.292623    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:31.296173    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:32.297272    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:32.297272    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:32.300365    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:33.300747    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:33.300747    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:33.304627    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:34.305148    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:34.305148    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:34.307286    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:35.308221    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:35.308221    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:35.311525    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:00:35.311525    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:00:35.311525    8792 type.go:168] "Request Body" body=""
	I1212 20:00:35.311525    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:35.314768    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:36.315303    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:36.315803    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:36.319885    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:00:37.320651    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:37.320651    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:37.323804    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:38.324633    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:38.324633    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:38.327596    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:39.328167    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:39.328827    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:39.332387    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:40.335388    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:40.335388    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:40.341222    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1212 20:00:41.342293    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:41.342293    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:41.346503    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:00:42.346733    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:42.347391    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:42.349901    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:43.350351    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:43.350351    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:43.353790    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:44.354356    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:44.354951    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:44.357421    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:45.357936    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:45.358254    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:45.361424    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:00:45.361488    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:00:45.361558    8792 type.go:168] "Request Body" body=""
	I1212 20:00:45.361734    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:45.364678    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:46.364915    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:46.364915    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:46.368243    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:47.368380    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:47.368380    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:47.371842    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:48.372123    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:48.372496    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:48.375782    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:49.376328    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:49.376328    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:49.379339    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:50.379689    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:50.380090    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:50.383968    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:51.384253    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:51.384253    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:51.387625    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:52.388421    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:52.388421    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:52.391331    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:53.392103    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:53.392524    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:53.395936    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:54.396522    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:54.396914    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:54.399312    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:55.399853    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:55.399853    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:55.404011    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1212 20:00:55.404054    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:00:55.404190    8792 type.go:168] "Request Body" body=""
	I1212 20:00:55.404190    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:55.406466    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:56.406717    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:56.406717    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:56.409652    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:57.409829    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:57.409829    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:57.413808    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:58.414272    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:58.414272    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:58.416891    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:59.418094    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:59.418094    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:59.422379    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:01:00.422928    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:00.423211    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:00.425511    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:01.426949    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:01.427372    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:01.429940    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:02.430697    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:02.430894    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:02.434142    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:03.434554    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:03.434554    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:03.438125    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:04.438646    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:04.438646    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:04.441873    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:05.442580    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:05.443007    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:05.445227    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:01:05.445288    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:01:05.445349    8792 type.go:168] "Request Body" body=""
	I1212 20:01:05.445349    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:05.447160    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1212 20:01:06.448042    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:06.448299    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:06.451364    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:07.451519    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:07.451519    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:07.454072    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:08.455225    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:08.455581    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:08.458949    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:09.459239    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:09.459483    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:09.462124    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:10.462488    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:10.462488    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:10.465073    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:11.466146    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:11.466334    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:11.468858    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:12.469556    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:12.469556    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:12.472263    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:13.473070    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:13.473070    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:13.476554    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:14.476996    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:14.477386    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:14.479751    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:15.480652    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:15.480652    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:15.484243    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:01:15.484268    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:01:15.484379    8792 type.go:168] "Request Body" body=""
	I1212 20:01:15.484379    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:15.486997    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:16.487837    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:16.487837    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:16.491073    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:17.491865    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:17.492218    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:17.495307    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:18.495909    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:18.495909    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:18.499046    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:19.499542    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:19.499542    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:19.502844    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:20.503664    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:20.503664    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:20.506838    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:21.507123    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:21.507496    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:21.510126    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:22.510522    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:22.510522    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:22.513442    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:23.514259    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:23.514259    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:23.516261    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:24.517279    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:24.517279    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:24.520541    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:25.521455    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:25.521455    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:25.524551    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:01:25.524625    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:01:25.524657    8792 type.go:168] "Request Body" body=""
	I1212 20:01:25.524657    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:25.527752    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:26.528360    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:26.528723    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:26.532917    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:01:27.533242    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:27.533242    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:27.537366    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:01:28.538106    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:28.538495    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:28.543549    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1212 20:01:29.544680    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:29.544680    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:29.548232    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:30.548450    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:30.548850    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:30.552101    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:31.552352    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:31.552352    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:31.556248    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:32.556689    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:32.556689    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:32.560889    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:01:33.561227    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:33.561227    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:33.565100    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:34.566919    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:34.566919    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:34.573248    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=6
	I1212 20:01:35.574024    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:35.574411    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:35.577335    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:01:35.577335    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:01:35.577335    8792 type.go:168] "Request Body" body=""
	I1212 20:01:35.577335    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:35.579846    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:36.580067    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:36.580067    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:36.582937    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:37.583614    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:37.584133    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:37.588041    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:38.588334    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:38.588334    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:38.590836    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:39.591771    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:39.592199    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:39.596300    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:40.596570    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:40.596570    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:40.599738    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:41.600585    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:41.600964    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:41.603618    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:42.604326    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:42.604326    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:42.607888    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:43.608118    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:43.608432    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:43.611303    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:44.612148    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:44.612148    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:44.615841    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:45.616729    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:45.616729    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:45.619383    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:01:45.619383    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:01:45.619913    8792 type.go:168] "Request Body" body=""
	I1212 20:01:45.619962    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:45.624234    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:01:46.624440    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:46.624440    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:46.631606    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=7
	I1212 20:01:47.631772    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:47.631772    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:47.634254    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:48.635335    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:48.635335    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:48.638393    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:49.638538    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:49.638538    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:49.642244    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:50.643486    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:50.643486    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:50.646864    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:51.647407    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:51.648062    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:51.651297    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:52.652310    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:52.652310    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:52.656003    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:53.657050    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:53.657050    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:53.660358    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:54.661093    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:54.661093    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:54.664217    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:55.665772    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:55.665772    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:55.669789    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1212 20:01:55.669789    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:01:55.669789    8792 type.go:168] "Request Body" body=""
	I1212 20:01:55.669789    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:55.672845    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:56.673184    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:56.673578    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:56.676091    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:57.677260    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:57.677260    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:57.680492    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:58.680999    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:58.681801    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:58.684437    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:59.685343    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:59.685343    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:59.688492    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:00.689226    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:00.689226    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:00.692407    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:01.693054    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:01.693054    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:01.696414    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:02.696707    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:02.696707    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:02.700656    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:03.701360    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:03.701764    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:03.704532    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:04.705055    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:04.705395    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:04.709582    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:02:05.709819    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:05.709819    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:05.712925    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:02:05.712925    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:02:05.712925    8792 type.go:168] "Request Body" body=""
	I1212 20:02:05.712925    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:05.714981    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:06.715647    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:06.715989    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:06.718856    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:07.719549    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:07.719950    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:07.723017    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:08.723622    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:08.723991    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:08.726824    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:09.727519    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:09.727519    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:09.731398    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:10.731940    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:10.732255    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:10.735314    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:11.736266    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:11.736266    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:11.739684    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:12.740926    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:12.741346    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:12.744101    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:13.745071    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:13.745071    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:13.749298    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:02:14.749764    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:14.749764    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:14.753277    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:15.753345    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:15.753345    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:15.755998    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:02:15.756520    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:02:15.756618    8792 type.go:168] "Request Body" body=""
	I1212 20:02:15.756676    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:15.758786    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:16.759785    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:16.759785    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:16.763359    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:17.763591    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:17.763591    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:17.767014    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:18.767248    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:18.767248    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:18.770795    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:19.770962    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:19.770962    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:19.773337    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:20.774557    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:20.774557    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:20.777421    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:21.778527    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:21.778968    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:21.782312    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:22.783001    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:22.783358    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:22.785874    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:23.786668    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:23.786668    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:23.789637    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:24.790000    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:24.790000    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:24.793439    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:25.793897    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:25.793897    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:25.797842    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:02:25.797972    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:02:25.797972    8792 type.go:168] "Request Body" body=""
	I1212 20:02:25.797972    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:25.800999    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:26.801297    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:26.801297    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:26.804559    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:27.805028    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:27.805383    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:27.808770    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:28.809311    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:28.809864    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:28.812697    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:29.812980    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:29.812980    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:29.816569    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:30.816822    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:30.816822    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:30.819812    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:31.820344    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:31.820344    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:31.824040    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:32.825223    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:32.825223    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:32.828636    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:33.828922    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:33.828922    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:33.833012    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:02:34.834105    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:34.834781    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:34.837739    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:35.838239    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:35.839054    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:35.842296    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:02:35.842377    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:02:35.842447    8792 type.go:168] "Request Body" body=""
	I1212 20:02:35.842525    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:35.845253    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:36.845542    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:36.845878    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:36.849197    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:37.849575    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:37.849575    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:37.852774    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:38.853254    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:38.853925    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:38.857020    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:39.857636    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:39.857636    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:39.861466    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:40.861880    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:40.862546    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:40.865734    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:41.866931    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:41.866931    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:41.870407    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:42.871284    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:42.871284    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:42.875909    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:02:43.876145    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:43.876145    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:43.879252    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:44.879595    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:44.879595    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:44.882581    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:45.882793    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:45.882793    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:45.886772    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:02:45.886823    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:02:45.886823    8792 type.go:168] "Request Body" body=""
	I1212 20:02:45.886823    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:45.889488    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:46.889817    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:46.889817    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:46.892533    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:47.893171    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:47.893605    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:47.897327    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:48.898243    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:48.898243    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:48.901190    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:49.901751    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:49.902239    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:49.905447    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:50.905509    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:50.905509    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:50.908968    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:51.909246    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:51.909595    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:51.913571    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:52.914178    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:52.914178    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:52.917630    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:53.918264    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:53.918264    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:53.921578    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:54.921843    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:54.921843    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:54.925388    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:55.925667    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:55.925667    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:55.929367    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:02:55.929367    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:02:55.929367    8792 type.go:168] "Request Body" body=""
	I1212 20:02:55.929367    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:55.932191    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:56.932533    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:56.932533    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:56.936530    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:57.937538    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:57.937902    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:57.940876    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:58.941300    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:58.941300    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:58.944722    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:59.945325    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:59.945325    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:59.948320    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:00.948833    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:00.948833    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:00.952416    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:01.953225    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:01.953225    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:01.956654    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:02.956910    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:02.956910    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:02.959952    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:03.960484    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:03.961032    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:03.963951    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:04.965244    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:04.965633    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:04.968258    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:05.968774    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:05.968774    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:05.971651    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:03:05.971651    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:03:05.971651    8792 type.go:168] "Request Body" body=""
	I1212 20:03:05.971651    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:05.974027    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:06.974449    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:06.974741    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:06.977205    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:07.977634    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:07.977798    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:07.981006    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:08.982134    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:08.982134    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:08.985063    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:09.985961    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:09.985961    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:09.988609    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:10.988755    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:10.988755    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:10.991472    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:11.992370    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:11.992370    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:11.996488    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:03:12.996868    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:12.997258    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:13.000762    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:14.001059    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:14.001059    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:14.004368    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:15.004777    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:15.004777    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:15.007757    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:16.008339    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:16.008625    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:16.011236    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:03:16.011236    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:03:16.011236    8792 type.go:168] "Request Body" body=""
	I1212 20:03:16.011236    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:16.013832    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:17.014609    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:17.014609    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:17.018477    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:18.018689    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:18.018689    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:18.022881    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:03:19.023377    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:19.023377    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:19.027571    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:03:20.028073    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:20.028073    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:20.031057    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:21.031744    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:21.032211    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:21.035492    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:22.036462    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:22.036462    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:22.038986    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:23.039813    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:23.040216    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:23.042835    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:24.043623    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:24.043623    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:24.047746    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:03:25.048465    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:25.048465    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:25.051125    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:26.051732    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:26.051732    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:26.055363    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:03:26.055363    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:03:26.055363    8792 type.go:168] "Request Body" body=""
	I1212 20:03:26.055363    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:26.058940    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:27.059108    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:27.059476    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:27.062503    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:28.062870    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:28.062870    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:28.066764    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:29.067215    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:29.067215    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:29.069923    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:30.070845    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:30.070845    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:30.073412    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:31.074536    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:31.074979    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:31.077758    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:32.078060    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:32.078060    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:32.082117    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:03:33.083505    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:33.083505    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:33.086255    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:34.087642    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:34.087642    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:34.090378    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:03:34.543368    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1212 20:03:34.543799    8792 node_ready.go:38] duration metric: took 6m0.000497s for node "functional-468800" to be "Ready" ...
	I1212 20:03:34.547199    8792 out.go:203] 
	W1212 20:03:34.550016    8792 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 20:03:34.550016    8792 out.go:285] * 
	W1212 20:03:34.552052    8792 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:03:34.555048    8792 out.go:203] 
	
	
	==> Docker <==
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.644022398Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.644029098Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.644048100Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.644083703Z" level=info msg="Initializing buildkit"
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.744677695Z" level=info msg="Completed buildkit initialization"
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.750002934Z" level=info msg="Daemon has completed initialization"
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.750231253Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.750252555Z" level=info msg="API listen on /run/docker.sock"
	Dec 12 19:57:30 functional-468800 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.750265456Z" level=info msg="API listen on [::]:2376"
	Dec 12 19:57:30 functional-468800 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 19:57:30 functional-468800 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 12 19:57:30 functional-468800 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 12 19:57:31 functional-468800 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Start docker client with request timeout 0s"
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Loaded network plugin cni"
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 12 19:57:31 functional-468800 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:05:43.479432   20118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:05:43.480386   20118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:05:43.481622   20118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:05:43.483327   20118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:05:43.485602   20118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000814] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000769] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000773] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000764] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000772] FS:  0000000000000000 GS:  0000000000000000
	[Dec12 19:57] CPU: 0 PID: 53838 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000857] RIP: 0033:0x7ff47e100b20
	[  +0.000391] Code: Unable to access opcode bytes at RIP 0x7ff47e100af6.
	[  +0.000659] RSP: 002b:00007ffe8b002070 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000797] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000766] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001155] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001186] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001227] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001126] FS:  0000000000000000 GS:  0000000000000000
	[  +0.862009] CPU: 6 PID: 53976 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000896] RIP: 0033:0x7f0cd9433b20
	[  +0.000429] Code: Unable to access opcode bytes at RIP 0x7f0cd9433af6.
	[  +0.000694] RSP: 002b:00007fff41d09ce0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000820] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000759] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000963] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000962] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001270] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000859] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 20:05:43 up  1:07,  0 user,  load average: 0.35, 0.34, 0.57
	Linux functional-468800 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 20:05:40 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:05:40 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 985.
	Dec 12 20:05:40 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:05:40 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:05:40 functional-468800 kubelet[19954]: E1212 20:05:40.776080   19954 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:05:40 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:05:40 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:05:41 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 986.
	Dec 12 20:05:41 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:05:41 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:05:41 functional-468800 kubelet[19967]: E1212 20:05:41.510402   19967 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:05:41 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:05:41 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:05:42 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 987.
	Dec 12 20:05:42 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:05:42 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:05:42 functional-468800 kubelet[19980]: E1212 20:05:42.265370   19980 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:05:42 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:05:42 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:05:42 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 988.
	Dec 12 20:05:42 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:05:42 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:05:43 functional-468800 kubelet[20006]: E1212 20:05:43.020276   20006 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:05:43 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:05:43 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-468800 -n functional-468800
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-468800 -n functional-468800: exit status 2 (585.2732ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-468800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (53.76s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (53.7s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out\kubectl.exe --context functional-468800 get pods
functional_test.go:756: (dbg) Non-zero exit: out\kubectl.exe --context functional-468800 get pods: exit status 1 (50.5281002s)

** stderr ** 
	E1212 20:05:55.229976    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:06:05.317454    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:06:15.359972    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:06:25.403815    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:06:35.445710    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out\\kubectl.exe --context functional-468800 get pods": exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-468800
helpers_test.go:244: (dbg) docker inspect functional-468800:

-- stdout --
	[
	    {
	        "Id": "0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356",
	        "Created": "2025-12-12T19:49:05.216637357Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42493,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T19:49:05.485304524Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/hosts",
	        "LogPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356-json.log",
	        "Name": "/functional-468800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-468800:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-468800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06-init/diff:/var/lib/docker/overlay2/98a48e148b465d6f3cfef773b7defe35ccd4e6e0eb3c49d8b329f27b5b52fa09/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-468800",
	                "Source": "/var/lib/docker/volumes/functional-468800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-468800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-468800",
	                "name.minikube.sigs.k8s.io": "functional-468800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1c576a1bc459e3ecef05356a56ff79fb60b3540cf362d7af9e4f9ccd4a4dded4",
	            "SandboxKey": "/var/run/docker/netns/1c576a1bc459",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55779"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55780"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55781"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55777"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55778"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-468800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a0db2e63885ea6d1e4cf32e8285a0f1e1fe10bae67ef9ab957c6270bdb8c136b",
	                    "EndpointID": "d5e453160824066e5fee477ceb2ca5411fd18d149268eed593084ba8a0cf13dd",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-468800",
	                        "0d3494c9d93e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-468800 -n functional-468800
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-468800 -n functional-468800: exit status 2 (612.4722ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-468800 logs -n 25: (1.1737491s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                          ARGS                                                           │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-461000 image build -t localhost/my-image:functional-461000 testdata\build --alsologtostderr                  │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ image   │ functional-461000 image ls                                                                                              │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ image   │ functional-461000 image ls --format json --alsologtostderr                                                              │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ image   │ functional-461000 image ls --format table --alsologtostderr                                                             │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ image   │ functional-461000 image ls --format short --alsologtostderr                                                             │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ service │ functional-461000 service hello-node --url --format={{.IP}}                                                             │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │                     │
	│ service │ functional-461000 service hello-node --url                                                                              │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:44 UTC │                     │
	│ delete  │ -p functional-461000                                                                                                    │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:48 UTC │ 12 Dec 25 19:48 UTC │
	│ start   │ -p functional-468800 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:48 UTC │                     │
	│ start   │ -p functional-468800 --alsologtostderr -v=8                                                                             │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:57 UTC │                     │
	│ cache   │ functional-468800 cache add registry.k8s.io/pause:3.1                                                                   │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ cache   │ functional-468800 cache add registry.k8s.io/pause:3.3                                                                   │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ cache   │ functional-468800 cache add registry.k8s.io/pause:latest                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ cache   │ functional-468800 cache add minikube-local-cache-test:functional-468800                                                 │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ cache   │ functional-468800 cache delete minikube-local-cache-test:functional-468800                                              │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ cache   │ list                                                                                                                    │ minikube          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ ssh     │ functional-468800 ssh sudo crictl images                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ ssh     │ functional-468800 ssh sudo docker rmi registry.k8s.io/pause:latest                                                      │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ ssh     │ functional-468800 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │                     │
	│ cache   │ functional-468800 cache reload                                                                                          │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ ssh     │ functional-468800 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                     │ minikube          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ kubectl │ functional-468800 kubectl -- --context functional-468800 get pods                                                       │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 19:57:24
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 19:57:24.956785    8792 out.go:360] Setting OutFile to fd 1808 ...
	I1212 19:57:24.998786    8792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:57:24.998786    8792 out.go:374] Setting ErrFile to fd 1700...
	I1212 19:57:24.998786    8792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:57:25.011786    8792 out.go:368] Setting JSON to false
	I1212 19:57:25.013782    8792 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3583,"bootTime":1765565861,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 19:57:25.013782    8792 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 19:57:25.016780    8792 out.go:179] * [functional-468800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 19:57:25.020780    8792 notify.go:221] Checking for updates...
	I1212 19:57:25.022780    8792 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 19:57:25.024782    8792 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 19:57:25.027780    8792 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 19:57:25.030779    8792 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 19:57:25.034782    8792 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:57:25.037790    8792 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 19:57:25.037790    8792 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:57:25.155476    8792 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 19:57:25.159985    8792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:57:25.387868    8792 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 19:57:25.372369133 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 19:57:25.391884    8792 out.go:179] * Using the docker driver based on existing profile
	I1212 19:57:25.396868    8792 start.go:309] selected driver: docker
	I1212 19:57:25.396868    8792 start.go:927] validating driver "docker" against &{Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:57:25.396868    8792 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 19:57:25.402871    8792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:57:25.622678    8792 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 19:57:25.606400505 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 19:57:25.701623    8792 cni.go:84] Creating CNI manager for ""
	I1212 19:57:25.701623    8792 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 19:57:25.701623    8792 start.go:353] cluster config:
	{Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:57:25.706631    8792 out.go:179] * Starting "functional-468800" primary control-plane node in "functional-468800" cluster
	I1212 19:57:25.708636    8792 cache.go:134] Beginning downloading kic base image for docker with docker
	I1212 19:57:25.711883    8792 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 19:57:25.714043    8792 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 19:57:25.714043    8792 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 19:57:25.714043    8792 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1212 19:57:25.714043    8792 cache.go:65] Caching tarball of preloaded images
	I1212 19:57:25.714043    8792 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 19:57:25.714043    8792 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1212 19:57:25.714043    8792 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\config.json ...
	I1212 19:57:25.792275    8792 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 19:57:25.792275    8792 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 19:57:25.792275    8792 cache.go:243] Successfully downloaded all kic artifacts
	I1212 19:57:25.792275    8792 start.go:360] acquireMachinesLock for functional-468800: {Name:mk7e7177bdfcb5a7969561474f8bb14fa15c1eab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 19:57:25.792275    8792 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-468800"
	I1212 19:57:25.792275    8792 start.go:96] Skipping create...Using existing machine configuration
	I1212 19:57:25.792275    8792 fix.go:54] fixHost starting: 
	I1212 19:57:25.799955    8792 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
	I1212 19:57:25.853025    8792 fix.go:112] recreateIfNeeded on functional-468800: state=Running err=<nil>
	W1212 19:57:25.853025    8792 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 19:57:25.856025    8792 out.go:252] * Updating the running docker "functional-468800" container ...
	I1212 19:57:25.856025    8792 machine.go:94] provisionDockerMachine start ...
	I1212 19:57:25.859025    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:25.918375    8792 main.go:143] libmachine: Using SSH client type: native
	I1212 19:57:25.918479    8792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:57:25.918479    8792 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 19:57:26.103358    8792 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-468800
	
	I1212 19:57:26.103411    8792 ubuntu.go:182] provisioning hostname "functional-468800"
	I1212 19:57:26.107534    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:26.162431    8792 main.go:143] libmachine: Using SSH client type: native
	I1212 19:57:26.162900    8792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:57:26.163030    8792 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-468800 && echo "functional-468800" | sudo tee /etc/hostname
	I1212 19:57:26.366993    8792 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-468800
	
	I1212 19:57:26.370927    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:26.421027    8792 main.go:143] libmachine: Using SSH client type: native
	I1212 19:57:26.422025    8792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:57:26.422025    8792 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-468800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-468800/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-468800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 19:57:26.592472    8792 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 19:57:26.592472    8792 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1212 19:57:26.592472    8792 ubuntu.go:190] setting up certificates
	I1212 19:57:26.592472    8792 provision.go:84] configureAuth start
	I1212 19:57:26.596494    8792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-468800
	I1212 19:57:26.648327    8792 provision.go:143] copyHostCerts
	I1212 19:57:26.648492    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1212 19:57:26.648569    8792 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1212 19:57:26.648569    8792 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1212 19:57:26.648569    8792 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 19:57:26.649807    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1212 19:57:26.649946    8792 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1212 19:57:26.649946    8792 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1212 19:57:26.649946    8792 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 19:57:26.650879    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1212 19:57:26.650879    8792 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1212 19:57:26.650879    8792 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1212 19:57:26.650879    8792 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1212 19:57:26.651440    8792 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-468800 san=[127.0.0.1 192.168.49.2 functional-468800 localhost minikube]
	I1212 19:57:26.782013    8792 provision.go:177] copyRemoteCerts
	I1212 19:57:26.785479    8792 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 19:57:26.788240    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:26.842524    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:26.968619    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1212 19:57:26.968964    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 19:57:26.995759    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1212 19:57:26.995759    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 19:57:27.024847    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1212 19:57:27.024847    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 19:57:27.057221    8792 provision.go:87] duration metric: took 464.7444ms to configureAuth
	I1212 19:57:27.057221    8792 ubuntu.go:206] setting minikube options for container-runtime
	I1212 19:57:27.057221    8792 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 19:57:27.061251    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:27.121889    8792 main.go:143] libmachine: Using SSH client type: native
	I1212 19:57:27.122548    8792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:57:27.122604    8792 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 19:57:27.313910    8792 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1212 19:57:27.313910    8792 ubuntu.go:71] root file system type: overlay
	I1212 19:57:27.313910    8792 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 19:57:27.317488    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:27.376486    8792 main.go:143] libmachine: Using SSH client type: native
	I1212 19:57:27.377052    8792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:57:27.377052    8792 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 19:57:27.577536    8792 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 19:57:27.581688    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:27.635455    8792 main.go:143] libmachine: Using SSH client type: native
	I1212 19:57:27.635931    8792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6dfd7fd00] 0x7ff6dfd82860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 19:57:27.635954    8792 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 19:57:27.828516    8792 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 19:57:27.828574    8792 machine.go:97] duration metric: took 1.9725293s to provisionDockerMachine
	I1212 19:57:27.828619    8792 start.go:293] postStartSetup for "functional-468800" (driver="docker")
	I1212 19:57:27.828619    8792 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 19:57:27.833127    8792 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 19:57:27.836440    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:27.891552    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:28.022421    8792 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 19:57:28.031829    8792 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1212 19:57:28.031829    8792 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1212 19:57:28.031829    8792 command_runner.go:130] > VERSION_ID="12"
	I1212 19:57:28.031829    8792 command_runner.go:130] > VERSION="12 (bookworm)"
	I1212 19:57:28.031829    8792 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1212 19:57:28.031829    8792 command_runner.go:130] > ID=debian
	I1212 19:57:28.031829    8792 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1212 19:57:28.031829    8792 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1212 19:57:28.031829    8792 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1212 19:57:28.031829    8792 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 19:57:28.031829    8792 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 19:57:28.031829    8792 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1212 19:57:28.032546    8792 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1212 19:57:28.033148    8792 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> 133962.pem in /etc/ssl/certs
	I1212 19:57:28.033204    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> /etc/ssl/certs/133962.pem
	I1212 19:57:28.033277    8792 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\13396\hosts -> hosts in /etc/test/nested/copy/13396
	I1212 19:57:28.033277    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\13396\hosts -> /etc/test/nested/copy/13396/hosts
	I1212 19:57:28.037935    8792 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/13396
	I1212 19:57:28.050821    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /etc/ssl/certs/133962.pem (1708 bytes)
	I1212 19:57:28.081156    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\13396\hosts --> /etc/test/nested/copy/13396/hosts (40 bytes)
	I1212 19:57:28.109846    8792 start.go:296] duration metric: took 281.2243ms for postStartSetup
	I1212 19:57:28.115818    8792 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 19:57:28.118674    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:28.171853    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:28.302700    8792 command_runner.go:130] > 1%
	I1212 19:57:28.308193    8792 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 19:57:28.316146    8792 command_runner.go:130] > 950G
	I1212 19:57:28.316204    8792 fix.go:56] duration metric: took 2.5239035s for fixHost
	I1212 19:57:28.316204    8792 start.go:83] releasing machines lock for "functional-468800", held for 2.5239035s
	I1212 19:57:28.320187    8792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-468800
	I1212 19:57:28.373764    8792 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1212 19:57:28.378728    8792 ssh_runner.go:195] Run: cat /version.json
	I1212 19:57:28.378728    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:28.382043    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:28.432252    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:28.433503    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:28.550849    8792 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W1212 19:57:28.550961    8792 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1212 19:57:28.550961    8792 command_runner.go:130] > {"iso_version": "v1.37.0-1765481609-22101", "kicbase_version": "v0.0.48-1765505794-22112", "minikube_version": "v1.37.0", "commit": "2e51b54b5cee5d454381ac23cfe3d8d395879671"}
	I1212 19:57:28.556187    8792 ssh_runner.go:195] Run: systemctl --version
	I1212 19:57:28.565686    8792 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1212 19:57:28.565686    8792 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1212 19:57:28.570074    8792 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 19:57:28.577782    8792 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 19:57:28.578775    8792 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 19:57:28.583114    8792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 19:57:28.595283    8792 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 19:57:28.595283    8792 start.go:496] detecting cgroup driver to use...
	I1212 19:57:28.595283    8792 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 19:57:28.595283    8792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 19:57:28.617880    8792 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 19:57:28.622700    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 19:57:28.640953    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 19:57:28.655059    8792 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 19:57:28.659503    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1212 19:57:28.659726    8792 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1212 19:57:28.659726    8792 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1212 19:57:28.678759    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 19:57:28.696413    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 19:57:28.715842    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 19:57:28.736528    8792 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 19:57:28.755951    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 19:57:28.776240    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 19:57:28.795721    8792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 19:57:28.815051    8792 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 19:57:28.829778    8792 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 19:57:28.834204    8792 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 19:57:28.852899    8792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:57:28.995620    8792 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 19:57:29.167559    8792 start.go:496] detecting cgroup driver to use...
	I1212 19:57:29.167559    8792 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 19:57:29.172911    8792 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 19:57:29.191693    8792 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1212 19:57:29.191693    8792 command_runner.go:130] > [Unit]
	I1212 19:57:29.191693    8792 command_runner.go:130] > Description=Docker Application Container Engine
	I1212 19:57:29.191693    8792 command_runner.go:130] > Documentation=https://docs.docker.com
	I1212 19:57:29.191693    8792 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1212 19:57:29.191693    8792 command_runner.go:130] > Wants=network-online.target containerd.service
	I1212 19:57:29.191693    8792 command_runner.go:130] > Requires=docker.socket
	I1212 19:57:29.191693    8792 command_runner.go:130] > StartLimitBurst=3
	I1212 19:57:29.191693    8792 command_runner.go:130] > StartLimitIntervalSec=60
	I1212 19:57:29.191693    8792 command_runner.go:130] > [Service]
	I1212 19:57:29.191693    8792 command_runner.go:130] > Type=notify
	I1212 19:57:29.191693    8792 command_runner.go:130] > Restart=always
	I1212 19:57:29.191693    8792 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1212 19:57:29.191693    8792 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1212 19:57:29.191693    8792 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1212 19:57:29.191693    8792 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1212 19:57:29.191693    8792 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1212 19:57:29.191693    8792 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1212 19:57:29.191693    8792 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1212 19:57:29.191693    8792 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1212 19:57:29.191693    8792 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1212 19:57:29.191693    8792 command_runner.go:130] > ExecStart=
	I1212 19:57:29.191693    8792 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1212 19:57:29.191693    8792 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1212 19:57:29.191693    8792 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1212 19:57:29.191693    8792 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1212 19:57:29.191693    8792 command_runner.go:130] > LimitNOFILE=infinity
	I1212 19:57:29.191693    8792 command_runner.go:130] > LimitNPROC=infinity
	I1212 19:57:29.191693    8792 command_runner.go:130] > LimitCORE=infinity
	I1212 19:57:29.191693    8792 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1212 19:57:29.191693    8792 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1212 19:57:29.191693    8792 command_runner.go:130] > TasksMax=infinity
	I1212 19:57:29.191693    8792 command_runner.go:130] > TimeoutStartSec=0
	I1212 19:57:29.191693    8792 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1212 19:57:29.191693    8792 command_runner.go:130] > Delegate=yes
	I1212 19:57:29.191693    8792 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1212 19:57:29.191693    8792 command_runner.go:130] > KillMode=process
	I1212 19:57:29.191693    8792 command_runner.go:130] > OOMScoreAdjust=-500
	I1212 19:57:29.191693    8792 command_runner.go:130] > [Install]
	I1212 19:57:29.191693    8792 command_runner.go:130] > WantedBy=multi-user.target
	I1212 19:57:29.196788    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 19:57:29.221924    8792 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 19:57:29.312337    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 19:57:29.337554    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 19:57:29.357559    8792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 19:57:29.379522    8792 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1212 19:57:29.384213    8792 ssh_runner.go:195] Run: which cri-dockerd
	I1212 19:57:29.390808    8792 command_runner.go:130] > /usr/bin/cri-dockerd
	I1212 19:57:29.396438    8792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 19:57:29.409074    8792 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1212 19:57:29.434191    8792 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 19:57:29.578871    8792 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 19:57:29.719341    8792 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 19:57:29.719341    8792 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 19:57:29.746173    8792 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1212 19:57:29.768870    8792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:57:29.905737    8792 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 19:57:30.757640    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 19:57:30.780953    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1212 19:57:30.802218    8792 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1212 19:57:30.829184    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 19:57:30.853409    8792 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 19:57:30.994012    8792 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 19:57:31.134627    8792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:57:31.283484    8792 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 19:57:31.309618    8792 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1212 19:57:31.333897    8792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:57:31.475108    8792 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1212 19:57:31.578219    8792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 19:57:31.597007    8792 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 19:57:31.600988    8792 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 19:57:31.610316    8792 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1212 19:57:31.611281    8792 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 19:57:31.611281    8792 command_runner.go:130] > Device: 0,112	Inode: 1755        Links: 1
	I1212 19:57:31.611281    8792 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1212 19:57:31.611281    8792 command_runner.go:130] > Access: 2025-12-12 19:57:31.474638770 +0000
	I1212 19:57:31.611281    8792 command_runner.go:130] > Modify: 2025-12-12 19:57:31.474638770 +0000
	I1212 19:57:31.611281    8792 command_runner.go:130] > Change: 2025-12-12 19:57:31.484639595 +0000
	I1212 19:57:31.611281    8792 command_runner.go:130] >  Birth: -
	I1212 19:57:31.611281    8792 start.go:564] Will wait 60s for crictl version
	I1212 19:57:31.615844    8792 ssh_runner.go:195] Run: which crictl
	I1212 19:57:31.621876    8792 command_runner.go:130] > /usr/local/bin/crictl
	I1212 19:57:31.626999    8792 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 19:57:31.672687    8792 command_runner.go:130] > Version:  0.1.0
	I1212 19:57:31.672762    8792 command_runner.go:130] > RuntimeName:  docker
	I1212 19:57:31.672762    8792 command_runner.go:130] > RuntimeVersion:  29.1.2
	I1212 19:57:31.672762    8792 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 19:57:31.672790    8792 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1212 19:57:31.676132    8792 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 19:57:31.713311    8792 command_runner.go:130] > 29.1.2
	I1212 19:57:31.716489    8792 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 19:57:31.755737    8792 command_runner.go:130] > 29.1.2
	I1212 19:57:31.761482    8792 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1212 19:57:31.765357    8792 cli_runner.go:164] Run: docker exec -t functional-468800 dig +short host.docker.internal
	I1212 19:57:31.901903    8792 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1212 19:57:31.906530    8792 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1212 19:57:31.913687    8792 command_runner.go:130] > 192.168.65.254	host.minikube.internal
	I1212 19:57:31.917320    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:31.973317    8792 kubeadm.go:884] updating cluster {Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custo
mQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 19:57:31.973590    8792 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 19:57:31.977450    8792 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1212 19:57:32.013673    8792 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1212 19:57:32.013673    8792 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 19:57:32.013673    8792 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 19:57:32.013673    8792 docker.go:621] Images already preloaded, skipping extraction
	I1212 19:57:32.017349    8792 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1212 19:57:32.047537    8792 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1212 19:57:32.047537    8792 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 19:57:32.047537    8792 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 19:57:32.047537    8792 cache_images.go:86] Images are preloaded, skipping loading
	I1212 19:57:32.047537    8792 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1212 19:57:32.048190    8792 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-468800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 19:57:32.051146    8792 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 19:57:32.121447    8792 command_runner.go:130] > cgroupfs
	I1212 19:57:32.121447    8792 cni.go:84] Creating CNI manager for ""
	I1212 19:57:32.121447    8792 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 19:57:32.121447    8792 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 19:57:32.121964    8792 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-468800 NodeName:functional-468800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 19:57:32.122106    8792 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-468800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 19:57:32.126035    8792 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 19:57:32.138764    8792 command_runner.go:130] > kubeadm
	I1212 19:57:32.138798    8792 command_runner.go:130] > kubectl
	I1212 19:57:32.138825    8792 command_runner.go:130] > kubelet
	I1212 19:57:32.138845    8792 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 19:57:32.143533    8792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 19:57:32.155602    8792 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1212 19:57:32.179900    8792 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 19:57:32.199342    8792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1212 19:57:32.222871    8792 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 19:57:32.229151    8792 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1212 19:57:32.234589    8792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:57:32.373967    8792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 19:57:32.974236    8792 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800 for IP: 192.168.49.2
	I1212 19:57:32.974236    8792 certs.go:195] generating shared ca certs ...
	I1212 19:57:32.974236    8792 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:57:32.975214    8792 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1212 19:57:32.975214    8792 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1212 19:57:32.975214    8792 certs.go:257] generating profile certs ...
	I1212 19:57:32.976191    8792 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\client.key
	I1212 19:57:32.976561    8792 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.key.a2fee78d
	I1212 19:57:32.976892    8792 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.key
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 19:57:32.976892    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 19:57:32.977527    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 19:57:32.977863    8792 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem (1338 bytes)
	W1212 19:57:32.977863    8792 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396_empty.pem, impossibly tiny 0 bytes
	I1212 19:57:32.977863    8792 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1212 19:57:32.978401    8792 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 19:57:32.978646    8792 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 19:57:32.978696    8792 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1212 19:57:32.978696    8792 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem (1708 bytes)
	I1212 19:57:32.979304    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem -> /usr/share/ca-certificates/13396.pem
	I1212 19:57:32.979449    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> /usr/share/ca-certificates/133962.pem
	I1212 19:57:32.979529    8792 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:32.980729    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 19:57:33.008686    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 19:57:33.035660    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 19:57:33.063247    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 19:57:33.108547    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 19:57:33.138500    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 19:57:33.165883    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 19:57:33.195246    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 19:57:33.221022    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem --> /usr/share/ca-certificates/13396.pem (1338 bytes)
	I1212 19:57:33.248791    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /usr/share/ca-certificates/133962.pem (1708 bytes)
	I1212 19:57:33.274438    8792 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 19:57:33.302337    8792 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 19:57:33.324312    8792 ssh_runner.go:195] Run: openssl version
	I1212 19:57:33.335263    8792 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1212 19:57:33.339948    8792 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13396.pem
	I1212 19:57:33.356389    8792 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13396.pem /etc/ssl/certs/13396.pem
	I1212 19:57:33.375441    8792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13396.pem
	I1212 19:57:33.384435    8792 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 19:57:33.384435    8792 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 19:57:33.387660    8792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13396.pem
	I1212 19:57:33.430281    8792 command_runner.go:130] > 51391683
	I1212 19:57:33.435287    8792 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 19:57:33.452481    8792 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/133962.pem
	I1212 19:57:33.471523    8792 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/133962.pem /etc/ssl/certs/133962.pem
	I1212 19:57:33.489874    8792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133962.pem
	I1212 19:57:33.498068    8792 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 19:57:33.498068    8792 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 19:57:33.502698    8792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133962.pem
	I1212 19:57:33.544550    8792 command_runner.go:130] > 3ec20f2e
	I1212 19:57:33.549548    8792 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 19:57:33.566747    8792 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:33.583990    8792 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 19:57:33.600438    8792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:33.607945    8792 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:33.607945    8792 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:33.614484    8792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:33.657826    8792 command_runner.go:130] > b5213941
	I1212 19:57:33.662138    8792 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 19:57:33.678498    8792 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 19:57:33.685111    8792 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 19:57:33.685111    8792 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1212 19:57:33.685111    8792 command_runner.go:130] > Device: 8,48	Inode: 15292       Links: 1
	I1212 19:57:33.685111    8792 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 19:57:33.685797    8792 command_runner.go:130] > Access: 2025-12-12 19:53:20.728281925 +0000
	I1212 19:57:33.685797    8792 command_runner.go:130] > Modify: 2025-12-12 19:49:18.176374111 +0000
	I1212 19:57:33.685797    8792 command_runner.go:130] > Change: 2025-12-12 19:49:18.176374111 +0000
	I1212 19:57:33.685797    8792 command_runner.go:130] >  Birth: 2025-12-12 19:49:18.176374111 +0000
	I1212 19:57:33.689949    8792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 19:57:33.733144    8792 command_runner.go:130] > Certificate will not expire
	I1212 19:57:33.737823    8792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 19:57:33.780151    8792 command_runner.go:130] > Certificate will not expire
	I1212 19:57:33.785054    8792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 19:57:33.827773    8792 command_runner.go:130] > Certificate will not expire
	I1212 19:57:33.833292    8792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 19:57:33.875401    8792 command_runner.go:130] > Certificate will not expire
	I1212 19:57:33.880293    8792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 19:57:33.922924    8792 command_runner.go:130] > Certificate will not expire
	I1212 19:57:33.927940    8792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 19:57:33.970239    8792 command_runner.go:130] > Certificate will not expire
	I1212 19:57:33.970239    8792 kubeadm.go:401] StartCluster: {Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:57:33.976672    8792 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 19:57:34.008252    8792 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 19:57:34.020977    8792 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1212 19:57:34.021018    8792 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1212 19:57:34.021018    8792 command_runner.go:130] > /var/lib/minikube/etcd:
	I1212 19:57:34.021108    8792 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 19:57:34.021108    8792 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 19:57:34.025234    8792 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 19:57:34.045139    8792 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 19:57:34.049590    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:34.107138    8792 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-468800" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 19:57:34.107889    8792 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "functional-468800" cluster setting kubeconfig missing "functional-468800" context setting]
	I1212 19:57:34.107889    8792 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:57:34.126355    8792 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 19:57:34.126843    8792 kapi.go:59] client config for functional-468800: &rest.Config{Host:"https://127.0.0.1:55778", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData
:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff6e1d19080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 19:57:34.128169    8792 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 19:57:34.128230    8792 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 19:57:34.128230    8792 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 19:57:34.128350    8792 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 19:57:34.128350    8792 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 19:57:34.128350    8792 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 19:57:34.132435    8792 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 19:57:34.149951    8792 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1212 19:57:34.150008    8792 kubeadm.go:602] duration metric: took 128.8994ms to restartPrimaryControlPlane
	I1212 19:57:34.150032    8792 kubeadm.go:403] duration metric: took 179.7913ms to StartCluster
	I1212 19:57:34.150032    8792 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:57:34.150032    8792 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 19:57:34.151180    8792 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:57:34.152111    8792 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 19:57:34.152111    8792 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 19:57:34.152386    8792 addons.go:70] Setting storage-provisioner=true in profile "functional-468800"
	I1212 19:57:34.152386    8792 addons.go:70] Setting default-storageclass=true in profile "functional-468800"
	I1212 19:57:34.152426    8792 addons.go:239] Setting addon storage-provisioner=true in "functional-468800"
	I1212 19:57:34.152475    8792 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-468800"
	I1212 19:57:34.152564    8792 host.go:66] Checking if "functional-468800" exists ...
	I1212 19:57:34.152599    8792 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 19:57:34.155555    8792 out.go:179] * Verifying Kubernetes components...
	I1212 19:57:34.161161    8792 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
	I1212 19:57:34.161613    8792 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
	I1212 19:57:34.163072    8792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:57:34.221534    8792 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 19:57:34.221534    8792 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 19:57:34.221534    8792 kapi.go:59] client config for functional-468800: &rest.Config{Host:"https://127.0.0.1:55778", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff6e1d19080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 19:57:34.222943    8792 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 19:57:34.223481    8792 addons.go:239] Setting addon default-storageclass=true in "functional-468800"
	I1212 19:57:34.223558    8792 host.go:66] Checking if "functional-468800" exists ...
	I1212 19:57:34.223558    8792 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:34.223558    8792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 19:57:34.227691    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:34.230256    8792 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
	I1212 19:57:34.287093    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:34.289848    8792 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:34.289848    8792 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 19:57:34.293811    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:34.345554    8792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 19:57:34.348560    8792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 19:57:34.426758    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:34.480013    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:34.480104    8792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-468800
	I1212 19:57:34.534162    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:34.538400    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.538479    8792 retry.go:31] will retry after 344.600735ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.538532    8792 node_ready.go:35] waiting up to 6m0s for node "functional-468800" to be "Ready" ...
	I1212 19:57:34.539394    8792 type.go:168] "Request Body" body=""
	I1212 19:57:34.539597    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:34.541949    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:34.608531    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:34.613599    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.613599    8792 retry.go:31] will retry after 216.683996ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.835959    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:34.887701    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:34.908576    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:34.913475    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.913475    8792 retry.go:31] will retry after 230.473341ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.961197    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:34.966061    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:34.966061    8792 retry.go:31] will retry after 349.771822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.150121    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:35.221040    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:35.228247    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.228333    8792 retry.go:31] will retry after 512.778483ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.321063    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:35.394131    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:35.397148    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.397148    8792 retry.go:31] will retry after 487.352123ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.542707    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:35.542707    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:35.545160    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:35.747496    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:35.819613    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:35.822659    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.822659    8792 retry.go:31] will retry after 1.154413243s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.890743    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:35.965246    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:35.972460    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:35.972460    8792 retry.go:31] will retry after 1.245938436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:36.545730    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:36.545730    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:36.549771    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:57:36.983387    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:37.090901    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:37.094847    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:37.094847    8792 retry.go:31] will retry after 1.548342934s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:37.223991    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:37.295689    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:37.299705    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:37.299769    8792 retry.go:31] will retry after 1.579528606s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:37.551013    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:37.551013    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:37.554154    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:38.554939    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:38.555432    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:38.558234    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:38.649390    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:38.725500    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:38.729499    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:38.729499    8792 retry.go:31] will retry after 2.648471583s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:38.884600    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:38.953302    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:38.958318    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:38.958318    8792 retry.go:31] will retry after 2.058418403s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:39.559077    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:39.559356    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:39.562225    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:40.562954    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:40.563393    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:40.566347    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:41.022091    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:41.102318    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:41.106247    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:41.106247    8792 retry.go:31] will retry after 3.080320353s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:41.384408    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:41.470520    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:41.473795    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:41.473795    8792 retry.go:31] will retry after 2.343057986s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:41.566604    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:41.566604    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:41.569639    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:42.569950    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:42.569950    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:42.573153    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:43.573545    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:43.573545    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:43.577655    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:57:43.821674    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:43.897847    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:43.901846    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:43.901846    8792 retry.go:31] will retry after 5.566518346s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:44.193277    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:44.263403    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:44.269459    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:44.269459    8792 retry.go:31] will retry after 4.550082482s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:44.577835    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:44.577835    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:44.580876    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:57:44.581034    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:57:44.581158    8792 type.go:168] "Request Body" body=""
	I1212 19:57:44.581244    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:44.583508    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:45.583961    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:45.583961    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:45.587161    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:46.587855    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:46.588199    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:46.590728    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:47.591504    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:47.591504    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:47.594168    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:48.595392    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:48.595392    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:48.601208    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1212 19:57:48.824534    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:48.903714    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:48.909283    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:48.909283    8792 retry.go:31] will retry after 5.408295828s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:49.475338    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:49.554836    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:49.559515    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:49.559515    8792 retry.go:31] will retry after 7.920709676s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:49.602224    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:49.602480    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:49.605147    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:50.605575    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:50.605575    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:50.609094    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:51.610210    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:51.610210    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:51.613279    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:52.613438    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:52.613438    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:52.617857    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:57:53.618444    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:53.618444    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:53.622009    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:54.323567    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:57:54.399774    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:54.402767    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:54.402767    8792 retry.go:31] will retry after 5.650885129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:54.622233    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:54.622233    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:54.625806    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:57:54.625833    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:57:54.625833    8792 type.go:168] "Request Body" body=""
	I1212 19:57:54.625833    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:54.628220    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:55.628567    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:55.628567    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:55.632067    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:57:56.632335    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:56.632737    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:56.635417    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:57.485659    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:57:57.566715    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:57:57.570725    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:57.570725    8792 retry.go:31] will retry after 5.889801353s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:57:57.635601    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:57.636162    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:57.638437    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:58.639201    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:58.639201    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:58.641202    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:57:59.642751    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:57:59.642751    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:57:59.645820    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:00.059077    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:58:00.141196    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:00.144743    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:00.144828    8792 retry.go:31] will retry after 12.880427161s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:00.646278    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:00.646278    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:00.648514    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:01.648554    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:01.648554    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:01.652477    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:02.652719    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:02.652719    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:02.656865    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:58:03.466574    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:58:03.546687    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:03.552160    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:03.552160    8792 retry.go:31] will retry after 8.684375444s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:03.657068    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:03.657068    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:03.660376    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:04.660836    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:04.661165    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:04.664417    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:58:04.664489    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:58:04.664634    8792 type.go:168] "Request Body" body=""
	I1212 19:58:04.664723    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:04.667029    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:05.667419    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:05.667419    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:05.670032    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:06.670984    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:06.670984    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:06.674354    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:07.675175    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:07.675473    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:07.678161    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:08.679000    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:08.679000    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:08.682498    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:09.683536    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:09.684039    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:09.686703    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:10.687176    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:10.687514    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:10.691708    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:58:11.692097    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:11.692097    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:11.695419    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:12.243184    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:58:12.329214    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:12.335592    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:12.335592    8792 retry.go:31] will retry after 19.078221738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:12.695735    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:12.695735    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:12.698564    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:13.030727    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:58:13.107677    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:13.111475    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:13.111475    8792 retry.go:31] will retry after 24.078034123s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:13.699329    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:13.699329    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:13.703201    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:14.703632    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:14.703632    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:14.706632    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:58:14.706632    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:58:14.706632    8792 type.go:168] "Request Body" body=""
	I1212 19:58:14.706632    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:14.709461    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:15.709987    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:15.709987    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:15.713881    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:16.714426    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:16.714947    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:16.717509    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:17.718027    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:17.718027    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:17.721452    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:18.721719    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:18.722180    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:18.725521    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:19.726174    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:19.726174    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:19.731274    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:58:20.731838    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:20.731838    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:20.735774    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:21.736083    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:21.736083    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:21.739364    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:22.740462    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:22.740462    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:22.743494    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:23.744218    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:23.744882    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:23.747961    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:24.748401    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:24.748401    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:24.752939    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1212 19:58:24.752939    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:58:24.752939    8792 type.go:168] "Request Body" body=""
	I1212 19:58:24.752939    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:24.756295    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:25.756593    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:25.756959    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:25.759330    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:26.760825    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:26.760825    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:26.765414    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:58:27.765653    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:27.765653    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:27.769152    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:28.770176    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:28.770595    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:28.774341    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:29.774498    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:29.774498    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:29.777488    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:30.778437    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:30.778437    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:30.781414    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:31.419403    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:58:31.498102    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:31.502554    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:31.502554    8792 retry.go:31] will retry after 21.655222228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:31.781482    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:31.781482    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:31.783476    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1212 19:58:32.785130    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:32.785130    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:32.787452    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:33.788547    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:33.788547    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:33.791489    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:34.792428    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:34.792428    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:34.794457    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 19:58:34.794457    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:58:34.794457    8792 type.go:168] "Request Body" body=""
	I1212 19:58:34.794457    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:34.796423    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1212 19:58:35.796926    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:35.796926    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:35.800403    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:36.800694    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:36.800694    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:36.803902    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:37.195194    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:58:37.275035    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:37.278655    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:37.278655    8792 retry.go:31] will retry after 33.639329095s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:58:37.804194    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:37.804194    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:37.807496    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:38.808801    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:38.808801    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:38.811801    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:39.812262    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:39.812262    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:39.815469    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:40.816141    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:40.816141    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:40.819310    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:41.819973    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:41.819973    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:41.823039    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:42.824053    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:42.824053    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:42.827675    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:43.828345    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:43.828345    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:43.830350    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:44.830883    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:44.830883    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:44.834425    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:58:44.834502    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:58:44.834607    8792 type.go:168] "Request Body" body=""
	I1212 19:58:44.834703    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:44.836790    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:45.837202    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:45.837202    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:45.840615    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:46.840700    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:46.840700    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:46.843992    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:47.844334    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:47.844334    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:47.847669    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:48.848509    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:48.848509    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:48.851509    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:49.852471    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:49.852471    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:49.855417    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:50.855889    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:50.855889    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:50.858888    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:51.859324    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:51.859324    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:51.862752    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:52.863764    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:52.863764    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:52.867051    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:53.163493    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:58:53.239799    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:53.245721    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:58:53.245920    8792 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 19:58:53.867924    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:53.867924    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:53.871211    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:54.872502    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:54.872502    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:54.875103    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 19:58:54.875103    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:58:54.875635    8792 type.go:168] "Request Body" body=""
	I1212 19:58:54.875635    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:54.878074    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:55.878391    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:55.878391    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:55.881700    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:56.882314    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:56.882731    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:56.885332    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:57.886661    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:57.886661    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:57.890321    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:58:58.891069    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:58.891069    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:58.894045    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:58:59.894455    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:58:59.894455    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:58:59.897144    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:00.897724    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:00.897724    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:00.900925    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:01.901327    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:01.901327    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:01.904820    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:02.905377    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:02.905668    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:02.908844    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:03.909357    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:03.909357    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:03.912567    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:04.913190    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:04.913190    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:04.916248    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:59:04.916248    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:59:04.916248    8792 type.go:168] "Request Body" body=""
	I1212 19:59:04.916248    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:04.918608    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:05.918787    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:05.919084    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:05.921580    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:06.921873    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:06.921873    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:06.925988    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:59:07.927045    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:07.927045    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:07.930359    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:08.930575    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:08.930575    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:08.934014    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:09.935175    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:09.935175    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:09.939760    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 19:59:10.923536    8792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:59:10.940298    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:10.940298    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:10.942578    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:11.011286    8792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:59:11.011286    8792 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:59:11.011286    8792 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 19:59:11.015418    8792 out.go:179] * Enabled addons: 
	I1212 19:59:11.018366    8792 addons.go:530] duration metric: took 1m36.8652549s for enable addons: enabled=[]
	I1212 19:59:11.943695    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:11.943695    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:11.946524    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:12.947004    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:12.947004    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:12.950107    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:13.950403    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:13.950403    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:13.953492    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:14.953762    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:14.953762    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:14.957001    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:59:14.957153    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:59:14.957292    8792 type.go:168] "Request Body" body=""
	I1212 19:59:14.957344    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:14.959399    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:15.959732    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:15.959732    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:15.963481    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:16.964631    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:16.964631    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:16.967431    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:17.968335    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:17.968716    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:17.971422    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:18.975421    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:18.975482    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:18.981353    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1212 19:59:19.982483    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:19.982483    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:19.986458    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:20.986878    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:20.986878    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:20.990580    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:21.991705    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:21.991705    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:21.994313    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:22.994828    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:22.994828    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:22.998384    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:23.999291    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:23.999572    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:24.001757    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:25.002197    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:25.002197    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:25.006076    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:59:25.006076    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:59:25.006076    8792 type.go:168] "Request Body" body=""
	I1212 19:59:25.006076    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:25.008833    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:26.009236    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:26.009483    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:26.013280    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:27.013991    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:27.013991    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:27.017339    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:28.017861    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:28.017861    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:28.020302    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:29.021278    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:29.021278    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:29.024910    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:30.025134    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:30.025134    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:30.028490    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:31.029228    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:31.029228    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:31.032192    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:32.033358    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:32.033358    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:32.037022    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:33.037052    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:33.037052    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:33.039997    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:34.040974    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:34.040974    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:34.044336    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:35.045158    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:35.045158    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:35.050424    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	W1212 19:59:35.050478    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:59:35.050634    8792 type.go:168] "Request Body" body=""
	I1212 19:59:35.050710    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:35.053272    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:36.053659    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:36.053659    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:36.056921    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:37.057862    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:37.057983    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:37.061055    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:38.061705    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:38.061705    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:38.064401    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:39.065070    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:39.065070    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:39.070212    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1212 19:59:40.070745    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:40.070745    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:40.074056    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:41.074238    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:41.074238    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:41.077817    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:42.078786    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:42.078786    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:42.082102    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:43.082439    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:43.082849    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:43.086074    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:44.086257    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:44.086257    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:44.089158    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:45.089746    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:45.089746    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:45.093004    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 19:59:45.093004    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:59:45.093004    8792 type.go:168] "Request Body" body=""
	I1212 19:59:45.093004    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:45.096683    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:46.097116    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:46.097615    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:46.100214    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:47.101361    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:47.101361    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:47.104657    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:48.104994    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:48.104994    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:48.108049    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:49.109535    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:49.109535    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:49.112664    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:50.113614    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:50.113614    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:50.117411    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:51.117709    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:51.117709    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:51.121291    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:52.121914    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:52.122224    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:52.125068    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:53.125697    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:53.126105    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:53.129084    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:54.129467    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:54.129467    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:54.133149    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:55.133722    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:55.133722    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:55.139098    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	W1212 19:59:55.139630    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 19:59:55.139774    8792 type.go:168] "Request Body" body=""
	I1212 19:59:55.139830    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:55.142212    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:56.142471    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:56.142471    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:56.145561    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:57.146754    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:57.146754    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:57.150691    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 19:59:58.151315    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:58.151315    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:58.153802    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 19:59:59.154632    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 19:59:59.154632    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 19:59:59.157895    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:00.158286    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:00.158286    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:00.161521    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:01.161851    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:01.161851    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:01.165478    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:02.166140    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:02.166140    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:02.169015    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:03.169549    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:03.169549    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:03.179028    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=9
	I1212 20:00:04.179254    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:04.179632    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:04.182303    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:05.183057    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:05.183057    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:05.186169    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:00:05.186202    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:00:05.186368    8792 type.go:168] "Request Body" body=""
	I1212 20:00:05.186427    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:05.188490    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:06.189369    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:06.189369    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:06.191767    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:07.192287    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:07.192287    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:07.195873    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:08.196564    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:08.196564    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:08.200301    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:09.200652    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:09.201050    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:09.203873    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:10.204621    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:10.204621    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:10.207991    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:11.208169    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:11.208695    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:11.211546    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:12.212265    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:12.212265    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:12.215652    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:13.216481    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:13.216481    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:13.218808    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:14.219114    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:14.219114    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:14.222371    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:15.223587    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:15.223882    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:15.226696    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:00:15.226696    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:00:15.226696    8792 type.go:168] "Request Body" body=""
	I1212 20:00:15.227288    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:15.230014    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:16.230255    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:16.230702    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:16.234073    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:17.234537    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:17.234537    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:17.238981    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:00:18.240162    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:18.240450    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:18.242671    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:19.244029    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:19.244029    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:19.247551    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:20.248288    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:20.248689    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:20.251486    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:21.252448    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:21.252448    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:21.255871    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:22.256129    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:22.256129    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:22.259292    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:23.259853    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:23.260152    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:23.263166    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:24.264181    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:24.264523    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:24.267309    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:25.267655    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:25.267655    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:25.270583    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:00:25.270681    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:00:25.270716    8792 type.go:168] "Request Body" body=""
	I1212 20:00:25.270716    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:25.272780    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:26.273236    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:26.273236    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:26.276531    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:27.277612    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:27.277612    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:27.280399    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:28.280976    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:28.281348    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:28.284050    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:29.284889    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:29.284889    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:29.288318    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:30.289605    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:30.289605    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:30.292210    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:31.292623    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:31.292623    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:31.296173    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:32.297272    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:32.297272    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:32.300365    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:33.300747    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:33.300747    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:33.304627    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:34.305148    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:34.305148    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:34.307286    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:35.308221    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:35.308221    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:35.311525    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:00:35.311525    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:00:35.311525    8792 type.go:168] "Request Body" body=""
	I1212 20:00:35.311525    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:35.314768    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:36.315303    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:36.315803    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:36.319885    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:00:37.320651    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:37.320651    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:37.323804    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:38.324633    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:38.324633    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:38.327596    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:39.328167    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:39.328827    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:39.332387    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:40.335388    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:40.335388    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:40.341222    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1212 20:00:41.342293    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:41.342293    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:41.346503    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:00:42.346733    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:42.347391    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:42.349901    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:43.350351    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:43.350351    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:43.353790    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:44.354356    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:44.354951    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:44.357421    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:45.357936    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:45.358254    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:45.361424    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:00:45.361488    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:00:45.361558    8792 type.go:168] "Request Body" body=""
	I1212 20:00:45.361734    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:45.364678    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:46.364915    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:46.364915    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:46.368243    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:47.368380    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:47.368380    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:47.371842    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:48.372123    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:48.372496    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:48.375782    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:49.376328    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:49.376328    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:49.379339    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:50.379689    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:50.380090    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:50.383968    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:51.384253    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:51.384253    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:51.387625    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:52.388421    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:52.388421    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:52.391331    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:53.392103    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:53.392524    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:53.395936    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:54.396522    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:54.396914    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:54.399312    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:55.399853    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:55.399853    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:55.404011    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1212 20:00:55.404054    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:00:55.404190    8792 type.go:168] "Request Body" body=""
	I1212 20:00:55.404190    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:55.406466    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:56.406717    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:56.406717    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:56.409652    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:57.409829    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:57.409829    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:57.413808    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:00:58.414272    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:58.414272    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:58.416891    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:00:59.418094    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:00:59.418094    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:00:59.422379    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:01:00.422928    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:00.423211    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:00.425511    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:01.426949    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:01.427372    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:01.429940    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:02.430697    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:02.430894    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:02.434142    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:03.434554    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:03.434554    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:03.438125    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:04.438646    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:04.438646    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:04.441873    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:05.442580    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:05.443007    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:05.445227    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:01:05.445288    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:01:05.445349    8792 type.go:168] "Request Body" body=""
	I1212 20:01:05.445349    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:05.447160    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1212 20:01:06.448042    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:06.448299    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:06.451364    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:07.451519    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:07.451519    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:07.454072    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:08.455225    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:08.455581    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:08.458949    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:09.459239    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:09.459483    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:09.462124    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:10.462488    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:10.462488    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:10.465073    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:11.466146    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:11.466334    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:11.468858    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:12.469556    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:12.469556    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:12.472263    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:13.473070    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:13.473070    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:13.476554    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:14.476996    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:14.477386    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:14.479751    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:15.480652    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:15.480652    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:15.484243    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:01:15.484268    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:01:15.484379    8792 type.go:168] "Request Body" body=""
	I1212 20:01:15.484379    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:15.486997    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:16.487837    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:16.487837    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:16.491073    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:17.491865    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:17.492218    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:17.495307    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:18.495909    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:18.495909    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:18.499046    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:19.499542    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:19.499542    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:19.502844    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:20.503664    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:20.503664    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:20.506838    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:21.507123    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:21.507496    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:21.510126    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:22.510522    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:22.510522    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:22.513442    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:23.514259    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:23.514259    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:23.516261    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:24.517279    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:24.517279    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:24.520541    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:25.521455    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:25.521455    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:25.524551    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:01:25.524625    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:01:25.524657    8792 type.go:168] "Request Body" body=""
	I1212 20:01:25.524657    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:25.527752    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:26.528360    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:26.528723    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:26.532917    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:01:27.533242    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:27.533242    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:27.537366    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:01:28.538106    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:28.538495    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:28.543549    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1212 20:01:29.544680    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:29.544680    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:29.548232    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:30.548450    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:30.548850    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:30.552101    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:31.552352    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:31.552352    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:31.556248    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:32.556689    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:32.556689    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:32.560889    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:01:33.561227    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:33.561227    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:33.565100    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:34.566919    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:34.566919    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:34.573248    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=6
	I1212 20:01:35.574024    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:35.574411    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:35.577335    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:01:35.577335    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:01:35.577335    8792 type.go:168] "Request Body" body=""
	I1212 20:01:35.577335    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:35.579846    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:36.580067    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:36.580067    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:36.582937    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:37.583614    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:37.584133    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:37.588041    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:38.588334    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:38.588334    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:38.590836    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:39.591771    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:39.592199    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:39.596300    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:40.596570    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:40.596570    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:40.599738    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:41.600585    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:41.600964    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:41.603618    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:42.604326    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:42.604326    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:42.607888    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:43.608118    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:43.608432    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:43.611303    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:44.612148    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:44.612148    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:44.615841    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:45.616729    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:45.616729    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:45.619383    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:01:45.619383    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:01:45.619913    8792 type.go:168] "Request Body" body=""
	I1212 20:01:45.619962    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:45.624234    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:01:46.624440    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:46.624440    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:46.631606    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=7
	I1212 20:01:47.631772    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:47.631772    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:47.634254    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:48.635335    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:48.635335    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:48.638393    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:49.638538    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:49.638538    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:49.642244    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:50.643486    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:50.643486    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:50.646864    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:51.647407    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:51.648062    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:51.651297    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:52.652310    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:52.652310    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:52.656003    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:53.657050    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:53.657050    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:53.660358    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:54.661093    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:54.661093    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:54.664217    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:55.665772    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:55.665772    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:55.669789    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1212 20:01:55.669789    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:01:55.669789    8792 type.go:168] "Request Body" body=""
	I1212 20:01:55.669789    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:55.672845    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:56.673184    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:56.673578    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:56.676091    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:57.677260    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:57.677260    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:57.680492    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:01:58.680999    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:58.681801    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:58.684437    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:01:59.685343    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:01:59.685343    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:01:59.688492    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:00.689226    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:00.689226    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:00.692407    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:01.693054    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:01.693054    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:01.696414    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:02.696707    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:02.696707    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:02.700656    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:03.701360    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:03.701764    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:03.704532    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:04.705055    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:04.705395    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:04.709582    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:02:05.709819    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:05.709819    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:05.712925    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:02:05.712925    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:02:05.712925    8792 type.go:168] "Request Body" body=""
	I1212 20:02:05.712925    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:05.714981    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:06.715647    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:06.715989    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:06.718856    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:07.719549    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:07.719950    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:07.723017    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:08.723622    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:08.723991    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:08.726824    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:09.727519    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:09.727519    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:09.731398    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:10.731940    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:10.732255    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:10.735314    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:11.736266    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:11.736266    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:11.739684    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:12.740926    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:12.741346    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:12.744101    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:13.745071    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:13.745071    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:13.749298    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:02:14.749764    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:14.749764    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:14.753277    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:15.753345    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:15.753345    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:15.755998    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:02:15.756520    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:02:15.756618    8792 type.go:168] "Request Body" body=""
	I1212 20:02:15.756676    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:15.758786    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:16.759785    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:16.759785    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:16.763359    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:17.763591    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:17.763591    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:17.767014    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:18.767248    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:18.767248    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:18.770795    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:19.770962    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:19.770962    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:19.773337    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:20.774557    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:20.774557    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:20.777421    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:21.778527    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:21.778968    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:21.782312    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:22.783001    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:22.783358    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:22.785874    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:23.786668    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:23.786668    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:23.789637    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:02:24.790000    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:24.790000    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:24.793439    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:02:25.793897    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:02:25.793897    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:25.797842    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:02:25.797972    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:02:25.797972    8792 type.go:168] "Request Body" body=""
	I1212 20:02:25.797972    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:02:25.800999    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	[... retry cycle repeats unchanged from 20:02:26 through 20:03:16: every second a "Got a Retry-After response" (attempts 1-10) followed by the identical GET https://127.0.0.1:55778/api/v1/nodes/functional-468800 request and empty-status response, each ten-attempt round ending in the same node_ready.go:55 warning: error getting node "functional-468800" condition "Ready" status (will retry): EOF — ~330 near-duplicate log lines elided ...]
	W1212 20:03:16.011236    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:03:16.011236    8792 type.go:168] "Request Body" body=""
	I1212 20:03:16.011236    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:16.013832    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:17.014609    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:17.014609    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:17.018477    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:18.018689    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:18.018689    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:18.022881    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:03:19.023377    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:19.023377    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:19.027571    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:03:20.028073    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:20.028073    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:20.031057    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:21.031744    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:21.032211    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:21.035492    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:22.036462    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:22.036462    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:22.038986    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:23.039813    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:23.040216    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:23.042835    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:24.043623    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:24.043623    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:24.047746    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:03:25.048465    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:25.048465    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:25.051125    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:26.051732    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:26.051732    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:26.055363    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1212 20:03:26.055363    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): Get "https://127.0.0.1:55778/api/v1/nodes/functional-468800": EOF
	I1212 20:03:26.055363    8792 type.go:168] "Request Body" body=""
	I1212 20:03:26.055363    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:26.058940    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:27.059108    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:27.059476    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:27.062503    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:28.062870    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:28.062870    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:28.066764    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:03:29.067215    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:29.067215    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:29.069923    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:30.070845    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:30.070845    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:30.073412    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:31.074536    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:31.074979    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:31.077758    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:32.078060    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:32.078060    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:32.082117    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1212 20:03:33.083505    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:33.083505    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:33.086255    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1212 20:03:34.087642    8792 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55778/api/v1/nodes/functional-468800"
	I1212 20:03:34.087642    8792 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55778/api/v1/nodes/functional-468800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1212 20:03:34.090378    8792 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1212 20:03:34.543368    8792 node_ready.go:55] error getting node "functional-468800" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1212 20:03:34.543799    8792 node_ready.go:38] duration metric: took 6m0.000497s for node "functional-468800" to be "Ready" ...
	I1212 20:03:34.547199    8792 out.go:203] 
	W1212 20:03:34.550016    8792 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 20:03:34.550016    8792 out.go:285] * 
	W1212 20:03:34.552052    8792 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:03:34.555048    8792 out.go:203] 
	
	
	==> Docker <==
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.644022398Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.644029098Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.644048100Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.644083703Z" level=info msg="Initializing buildkit"
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.744677695Z" level=info msg="Completed buildkit initialization"
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.750002934Z" level=info msg="Daemon has completed initialization"
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.750231253Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.750252555Z" level=info msg="API listen on /run/docker.sock"
	Dec 12 19:57:30 functional-468800 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 12 19:57:30 functional-468800 dockerd[10484]: time="2025-12-12T19:57:30.750265456Z" level=info msg="API listen on [::]:2376"
	Dec 12 19:57:30 functional-468800 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 19:57:30 functional-468800 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 12 19:57:30 functional-468800 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 12 19:57:31 functional-468800 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Start docker client with request timeout 0s"
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Loaded network plugin cni"
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 12 19:57:31 functional-468800 cri-dockerd[10806]: time="2025-12-12T19:57:31Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 12 19:57:31 functional-468800 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:06:37.224281   21120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:06:37.225359   21120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:06:37.226428   21120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:06:37.228757   21120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:06:37.230175   21120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000814] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000769] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000773] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000764] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000772] FS:  0000000000000000 GS:  0000000000000000
	[Dec12 19:57] CPU: 0 PID: 53838 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000857] RIP: 0033:0x7ff47e100b20
	[  +0.000391] Code: Unable to access opcode bytes at RIP 0x7ff47e100af6.
	[  +0.000659] RSP: 002b:00007ffe8b002070 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000797] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000766] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001155] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001186] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001227] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001126] FS:  0000000000000000 GS:  0000000000000000
	[  +0.862009] CPU: 6 PID: 53976 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000896] RIP: 0033:0x7f0cd9433b20
	[  +0.000429] Code: Unable to access opcode bytes at RIP 0x7f0cd9433af6.
	[  +0.000694] RSP: 002b:00007fff41d09ce0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000820] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000759] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000963] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000962] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001270] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000859] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 20:06:37 up  1:08,  0 user,  load average: 0.48, 0.36, 0.56
	Linux functional-468800 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 20:06:34 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:06:34 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1057.
	Dec 12 20:06:34 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:06:34 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:06:34 functional-468800 kubelet[20962]: E1212 20:06:34.769186   20962 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:06:34 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:06:34 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:06:35 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1058.
	Dec 12 20:06:35 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:06:35 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:06:35 functional-468800 kubelet[20976]: E1212 20:06:35.514762   20976 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:06:35 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:06:35 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:06:36 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1059.
	Dec 12 20:06:36 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:06:36 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:06:36 functional-468800 kubelet[21004]: E1212 20:06:36.258163   21004 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:06:36 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:06:36 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:06:36 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1060.
	Dec 12 20:06:36 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:06:36 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:06:37 functional-468800 kubelet[21061]: E1212 20:06:37.011796   21061 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:06:37 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:06:37 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-468800 -n functional-468800
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-468800 -n functional-468800: exit status 2 (574.1536ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-468800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (53.70s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (740.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-468800 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1212 20:07:31.855799   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:08:54.929682   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:09:54.774069   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:12:31.859688   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:12:57.850426   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:14:54.777764   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:17:31.863008   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-468800 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m16.4013438s)

-- stdout --
	* [functional-468800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting "functional-468800" primary control-plane node in "functional-468800" cluster
	* Pulling base image v0.0.48-1765505794-22112 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

-- /stdout --
** stderr ** 
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000441542s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: identical to the first initialization attempt above (timing aside)
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: identical to the first initialization attempt above (timing aside)
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
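The stderr above suggests retrying with a kubelet cgroup-driver override, and the kubeadm warnings point at cgroup v1 deprecation in kubelet v1.35+. A minimal triage sketch, assuming a Linux/WSL2 host like the one in this run (the `functional-468800` profile name is taken from this log; whether the override actually resolves this failure is not confirmed here):

```shell
# Check whether the node runs cgroup v1 or v2. The SystemVerification warning
# above fires on cgroup v1, which kubelet v1.35+ rejects unless the kubelet
# config option FailCgroupV1 is explicitly set to false.
fs_type=$(stat -fc %T /sys/fs/cgroup/)
echo "cgroup filesystem: ${fs_type}"
# "cgroup2fs" indicates cgroup v2; "tmpfs" indicates cgroup v1.

# Retry per the log's own suggestion (hypothetical fix for this run):
#   minikube start -p functional-468800 --extra-config=kubelet.cgroup-driver=systemd
```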
functional_test.go:774: failed to restart minikube. args "out/minikube-windows-amd64.exe start -p functional-468800 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m16.4110972s for "functional-468800" cluster.
I1212 20:18:55.079649   13396 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-468800
helpers_test.go:244: (dbg) docker inspect functional-468800:

-- stdout --
	[
	    {
	        "Id": "0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356",
	        "Created": "2025-12-12T19:49:05.216637357Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42493,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T19:49:05.485304524Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/hosts",
	        "LogPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356-json.log",
	        "Name": "/functional-468800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-468800:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-468800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06-init/diff:/var/lib/docker/overlay2/98a48e148b465d6f3cfef773b7defe35ccd4e6e0eb3c49d8b329f27b5b52fa09/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-468800",
	                "Source": "/var/lib/docker/volumes/functional-468800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-468800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-468800",
	                "name.minikube.sigs.k8s.io": "functional-468800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1c576a1bc459e3ecef05356a56ff79fb60b3540cf362d7af9e4f9ccd4a4dded4",
	            "SandboxKey": "/var/run/docker/netns/1c576a1bc459",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55779"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55780"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55781"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55777"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55778"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-468800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a0db2e63885ea6d1e4cf32e8285a0f1e1fe10bae67ef9ab957c6270bdb8c136b",
	                    "EndpointID": "d5e453160824066e5fee477ceb2ca5411fd18d149268eed593084ba8a0cf13dd",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-468800",
	                        "0d3494c9d93e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-468800 -n functional-468800
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-468800 -n functional-468800: exit status 2 (598.5058ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-468800 logs -n 25: (1.2640629s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                          ARGS                                                           │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-461000 image ls                                                                                              │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ image   │ functional-461000 image ls --format json --alsologtostderr                                                              │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ image   │ functional-461000 image ls --format table --alsologtostderr                                                             │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ image   │ functional-461000 image ls --format short --alsologtostderr                                                             │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ service │ functional-461000 service hello-node --url --format={{.IP}}                                                             │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │                     │
	│ service │ functional-461000 service hello-node --url                                                                              │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:44 UTC │                     │
	│ delete  │ -p functional-461000                                                                                                    │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:48 UTC │ 12 Dec 25 19:48 UTC │
	│ start   │ -p functional-468800 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:48 UTC │                     │
	│ start   │ -p functional-468800 --alsologtostderr -v=8                                                                             │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:57 UTC │                     │
	│ cache   │ functional-468800 cache add registry.k8s.io/pause:3.1                                                                   │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ cache   │ functional-468800 cache add registry.k8s.io/pause:3.3                                                                   │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ cache   │ functional-468800 cache add registry.k8s.io/pause:latest                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ cache   │ functional-468800 cache add minikube-local-cache-test:functional-468800                                                 │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ cache   │ functional-468800 cache delete minikube-local-cache-test:functional-468800                                              │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ cache   │ list                                                                                                                    │ minikube          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ ssh     │ functional-468800 ssh sudo crictl images                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ ssh     │ functional-468800 ssh sudo docker rmi registry.k8s.io/pause:latest                                                      │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ ssh     │ functional-468800 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │                     │
	│ cache   │ functional-468800 cache reload                                                                                          │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ ssh     │ functional-468800 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                     │ minikube          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ kubectl │ functional-468800 kubectl -- --context functional-468800 get pods                                                       │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │                     │
	│ start   │ -p functional-468800 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:06 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:06:38
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:06:38.727985    1528 out.go:360] Setting OutFile to fd 1056 ...
	I1212 20:06:38.773098    1528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:06:38.773098    1528 out.go:374] Setting ErrFile to fd 1212...
	I1212 20:06:38.773098    1528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:06:38.787709    1528 out.go:368] Setting JSON to false
	I1212 20:06:38.790304    1528 start.go:133] hostinfo: {"hostname":"minikube4","uptime":4136,"bootTime":1765565861,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 20:06:38.790304    1528 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 20:06:38.796304    1528 out.go:179] * [functional-468800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 20:06:38.800290    1528 notify.go:221] Checking for updates...
	I1212 20:06:38.800290    1528 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 20:06:38.802303    1528 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:06:38.805306    1528 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 20:06:38.807332    1528 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:06:38.808856    1528 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:06:38.812430    1528 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 20:06:38.812430    1528 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:06:38.929707    1528 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 20:06:38.933677    1528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:06:39.195122    1528 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-12 20:06:39.177384092 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 20:06:39.201119    1528 out.go:179] * Using the docker driver based on existing profile
	I1212 20:06:39.203117    1528 start.go:309] selected driver: docker
	I1212 20:06:39.203117    1528 start.go:927] validating driver "docker" against &{Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreD
NSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:06:39.203117    1528 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:06:39.209122    1528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:06:39.449342    1528 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-12 20:06:39.430307853 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 20:06:39.528922    1528 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:06:39.529468    1528 cni.go:84] Creating CNI manager for ""
	I1212 20:06:39.529468    1528 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 20:06:39.529468    1528 start.go:353] cluster config:
	{Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDN
SLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:06:39.533005    1528 out.go:179] * Starting "functional-468800" primary control-plane node in "functional-468800" cluster
	I1212 20:06:39.535095    1528 cache.go:134] Beginning downloading kic base image for docker with docker
	I1212 20:06:39.537607    1528 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:06:39.540959    1528 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 20:06:39.540959    1528 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:06:39.540959    1528 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1212 20:06:39.540959    1528 cache.go:65] Caching tarball of preloaded images
	I1212 20:06:39.541554    1528 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 20:06:39.541554    1528 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1212 20:06:39.541554    1528 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\config.json ...
	I1212 20:06:39.619509    1528 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:06:39.619509    1528 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:06:39.619509    1528 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:06:39.619509    1528 start.go:360] acquireMachinesLock for functional-468800: {Name:mk7e7177bdfcb5a7969561474f8bb14fa15c1eab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:06:39.619509    1528 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-468800"
	I1212 20:06:39.620041    1528 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:06:39.620041    1528 fix.go:54] fixHost starting: 
	I1212 20:06:39.627157    1528 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
	I1212 20:06:39.683014    1528 fix.go:112] recreateIfNeeded on functional-468800: state=Running err=<nil>
	W1212 20:06:39.683376    1528 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 20:06:39.686124    1528 out.go:252] * Updating the running docker "functional-468800" container ...
	I1212 20:06:39.686124    1528 machine.go:94] provisionDockerMachine start ...
	I1212 20:06:39.689814    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:39.744908    1528 main.go:143] libmachine: Using SSH client type: native
	I1212 20:06:39.745476    1528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 20:06:39.745476    1528 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:06:39.930965    1528 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-468800
	
	I1212 20:06:39.931078    1528 ubuntu.go:182] provisioning hostname "functional-468800"
	I1212 20:06:39.934795    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:39.989752    1528 main.go:143] libmachine: Using SSH client type: native
	I1212 20:06:39.990452    1528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 20:06:39.990452    1528 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-468800 && echo "functional-468800" | sudo tee /etc/hostname
	I1212 20:06:40.176756    1528 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-468800
	
	I1212 20:06:40.180410    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:40.235554    1528 main.go:143] libmachine: Using SSH client type: native
	I1212 20:06:40.236742    1528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 20:06:40.236742    1528 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-468800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-468800/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-468800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:06:40.410965    1528 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:06:40.410965    1528 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1212 20:06:40.410965    1528 ubuntu.go:190] setting up certificates
	I1212 20:06:40.410965    1528 provision.go:84] configureAuth start
	I1212 20:06:40.414835    1528 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-468800
	I1212 20:06:40.468680    1528 provision.go:143] copyHostCerts
	I1212 20:06:40.468680    1528 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1212 20:06:40.468680    1528 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1212 20:06:40.468680    1528 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 20:06:40.469680    1528 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1212 20:06:40.469680    1528 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1212 20:06:40.469680    1528 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 20:06:40.470682    1528 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1212 20:06:40.470682    1528 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1212 20:06:40.470682    1528 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1212 20:06:40.471679    1528 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-468800 san=[127.0.0.1 192.168.49.2 functional-468800 localhost minikube]
	I1212 20:06:40.521679    1528 provision.go:177] copyRemoteCerts
	I1212 20:06:40.526217    1528 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:06:40.529224    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:40.578843    1528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 20:06:40.705122    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 20:06:40.732235    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 20:06:40.758034    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 20:06:40.787536    1528 provision.go:87] duration metric: took 376.5012ms to configureAuth
	I1212 20:06:40.787564    1528 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:06:40.788016    1528 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 20:06:40.791899    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:40.847433    1528 main.go:143] libmachine: Using SSH client type: native
	I1212 20:06:40.847433    1528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 20:06:40.847433    1528 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 20:06:41.031514    1528 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1212 20:06:41.031514    1528 ubuntu.go:71] root file system type: overlay
	I1212 20:06:41.031514    1528 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 20:06:41.035525    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:41.089326    1528 main.go:143] libmachine: Using SSH client type: native
	I1212 20:06:41.090065    1528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 20:06:41.090155    1528 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 20:06:41.283431    1528 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 20:06:41.287473    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:41.343081    1528 main.go:143] libmachine: Using SSH client type: native
	I1212 20:06:41.343562    1528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 20:06:41.343562    1528 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 20:06:41.525616    1528 main.go:143] libmachine: SSH cmd err, output: <nil>: 
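The unit update at 20:06:41.343 above uses a diff-or-replace pattern: the candidate docker.service is written to a `.new` path, and the file is only swapped in (followed by a daemon-reload and restart) when it actually differs from the installed one, so an unchanged config never restarts the daemon. A minimal sketch of that idempotent pattern, using hypothetical temp files in place of the real systemd paths and echo statements in place of the systemctl calls:

```shell
# Stand-ins for /lib/systemd/system/docker.service and docker.service.new.
current=$(mktemp)
new=$(mktemp)
printf 'ExecStart=/usr/bin/dockerd\n' > "$current"
printf 'ExecStart=/usr/bin/dockerd --tlsverify\n' > "$new"

# diff exits 0 when the files match; only on a difference do we replace the
# unit and (in the real flow) daemon-reload + restart the service.
if diff -u "$current" "$new" >/dev/null; then
  echo "unchanged: skipping reload"
else
  mv "$new" "$current"   # real flow: sudo mv ...service.new ...service
  echo "updated: would daemon-reload and restart docker"
fi
```

The log's one-liner expresses the same thing as `diff ... || { mv ...; systemctl daemon-reload; systemctl restart docker; }`, relying on diff's nonzero exit status to gate the replacement.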
	I1212 20:06:41.525616    1528 machine.go:97] duration metric: took 1.8394714s to provisionDockerMachine
	I1212 20:06:41.525616    1528 start.go:293] postStartSetup for "functional-468800" (driver="docker")
	I1212 20:06:41.525616    1528 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:06:41.530519    1528 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:06:41.534083    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:41.586502    1528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 20:06:41.720007    1528 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:06:41.727943    1528 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:06:41.727943    1528 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:06:41.727943    1528 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1212 20:06:41.727943    1528 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1212 20:06:41.728602    1528 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> 133962.pem in /etc/ssl/certs
	I1212 20:06:41.729437    1528 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\13396\hosts -> hosts in /etc/test/nested/copy/13396
	I1212 20:06:41.733519    1528 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/13396
	I1212 20:06:41.745958    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /etc/ssl/certs/133962.pem (1708 bytes)
	I1212 20:06:41.772738    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\13396\hosts --> /etc/test/nested/copy/13396/hosts (40 bytes)
	I1212 20:06:41.802626    1528 start.go:296] duration metric: took 277.0071ms for postStartSetup
	I1212 20:06:41.807164    1528 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:06:41.809505    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:41.864695    1528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 20:06:41.985729    1528 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:06:41.994649    1528 fix.go:56] duration metric: took 2.3745808s for fixHost
	I1212 20:06:41.994649    1528 start.go:83] releasing machines lock for "functional-468800", held for 2.3751133s
	I1212 20:06:41.998707    1528 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-468800
	I1212 20:06:42.059230    1528 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1212 20:06:42.063903    1528 ssh_runner.go:195] Run: cat /version.json
	I1212 20:06:42.063903    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:42.066691    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:42.116356    1528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 20:06:42.117357    1528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	W1212 20:06:42.228585    1528 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1212 20:06:42.232646    1528 ssh_runner.go:195] Run: systemctl --version
	I1212 20:06:42.247485    1528 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:06:42.257236    1528 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:06:42.263875    1528 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:06:42.279473    1528 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:06:42.279473    1528 start.go:496] detecting cgroup driver to use...
	I1212 20:06:42.279473    1528 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 20:06:42.283549    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:06:42.307873    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 20:06:42.326439    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 20:06:42.341366    1528 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 20:06:42.345268    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1212 20:06:42.347179    1528 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1212 20:06:42.347179    1528 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1212 20:06:42.365551    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 20:06:42.385740    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 20:06:42.407021    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 20:06:42.427172    1528 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:06:42.448213    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 20:06:42.467444    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 20:06:42.487296    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 20:06:42.507050    1528 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:06:42.524437    1528 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:06:42.541928    1528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:06:42.701987    1528 ssh_runner.go:195] Run: sudo systemctl restart containerd
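The run of `sed -i -r` commands above rewrites values in /etc/containerd/config.toml in place while keeping the file's nesting intact. The trick is the `( *)` capture group: it grabs the line's leading spaces and re-emits them via `\1`, so only the value changes. A self-contained sketch against a throwaway temp file (hypothetical stand-in for the real config):

```shell
# Build a tiny TOML fragment with an indented key, then flip its value the
# same way the log does, preserving the four leading spaces via \1.
cfg=$(mktemp)
printf '[plugins."io.containerd.grpc.v1.cri"]\n    SystemdCgroup = true\n' > "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|' "$cfg"
cat "$cfg"
```

Using `|` as the s/// delimiter, as the log does, avoids having to escape the `/` characters that appear in paths and values.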
	I1212 20:06:42.867618    1528 start.go:496] detecting cgroup driver to use...
	I1212 20:06:42.867618    1528 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 20:06:42.872524    1528 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 20:06:42.900833    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:06:42.922770    1528 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:06:42.982495    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:06:43.005292    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 20:06:43.026719    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:06:43.052829    1528 ssh_runner.go:195] Run: which cri-dockerd
	I1212 20:06:43.064606    1528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 20:06:43.079549    1528 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1212 20:06:43.104999    1528 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 20:06:43.240280    1528 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 20:06:43.379193    1528 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 20:06:43.379358    1528 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 20:06:43.405761    1528 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1212 20:06:43.427392    1528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:06:43.565288    1528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 20:06:44.374705    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:06:44.396001    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1212 20:06:44.418749    1528 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1212 20:06:44.445721    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 20:06:44.466663    1528 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 20:06:44.598807    1528 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 20:06:44.740962    1528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:06:44.883493    1528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 20:06:44.907977    1528 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1212 20:06:44.931006    1528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:06:45.071046    1528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1212 20:06:45.171465    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 20:06:45.190143    1528 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 20:06:45.194535    1528 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 20:06:45.202518    1528 start.go:564] Will wait 60s for crictl version
	I1212 20:06:45.206873    1528 ssh_runner.go:195] Run: which crictl
	I1212 20:06:45.221614    1528 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:06:45.263002    1528 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1212 20:06:45.266767    1528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 20:06:45.308717    1528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 20:06:45.348580    1528 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1212 20:06:45.352493    1528 cli_runner.go:164] Run: docker exec -t functional-468800 dig +short host.docker.internal
	I1212 20:06:45.482840    1528 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1212 20:06:45.487311    1528 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1212 20:06:45.498523    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:45.552748    1528 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1212 20:06:45.554383    1528 kubeadm.go:884] updating cluster {Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:06:45.554933    1528 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 20:06:45.558499    1528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 20:06:45.589105    1528 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-468800
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1212 20:06:45.589105    1528 docker.go:621] Images already preloaded, skipping extraction
	I1212 20:06:45.592742    1528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 20:06:45.625313    1528 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-468800
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1212 20:06:45.625313    1528 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:06:45.625313    1528 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1212 20:06:45.625829    1528 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-468800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:06:45.629232    1528 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 20:06:45.698056    1528 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1212 20:06:45.698078    1528 cni.go:84] Creating CNI manager for ""
	I1212 20:06:45.698133    1528 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 20:06:45.698180    1528 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:06:45.698180    1528 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-468800 NodeName:functional-468800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:06:45.698180    1528 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-468800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
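The kubeadm config printed above is a single multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by `---`. A quick structural sanity check on such a file, sketched here against a hypothetical temp copy rather than the real /var/tmp/minikube/kubeadm.yaml.new, is to count the top-level `kind:` entries:

```shell
# Recreate the four-document shape of the generated kubeadm.yaml and count
# the "kind:" lines -- one per YAML document in the stream.
f=$(mktemp)
printf '%s\n---\n%s\n---\n%s\n---\n%s\n' \
  'kind: InitConfiguration' \
  'kind: ClusterConfiguration' \
  'kind: KubeletConfiguration' \
  'kind: KubeProxyConfiguration' > "$f"
grep -c '^kind:' "$f"   # prints 4
```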
	
	I1212 20:06:45.702170    1528 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 20:06:45.714209    1528 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:06:45.719390    1528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:06:45.731628    1528 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1212 20:06:45.753236    1528 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 20:06:45.772644    1528 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I1212 20:06:45.798125    1528 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 20:06:45.809796    1528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:06:45.998447    1528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:06:46.682417    1528 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800 for IP: 192.168.49.2
	I1212 20:06:46.682417    1528 certs.go:195] generating shared ca certs ...
	I1212 20:06:46.682417    1528 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:06:46.683216    1528 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1212 20:06:46.683331    1528 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1212 20:06:46.683331    1528 certs.go:257] generating profile certs ...
	I1212 20:06:46.683996    1528 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\client.key
	I1212 20:06:46.684112    1528 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.key.a2fee78d
	I1212 20:06:46.684112    1528 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.key
	I1212 20:06:46.685029    1528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem (1338 bytes)
	W1212 20:06:46.685029    1528 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396_empty.pem, impossibly tiny 0 bytes
	I1212 20:06:46.685029    1528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1212 20:06:46.685554    1528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 20:06:46.685624    1528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 20:06:46.685624    1528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1212 20:06:46.685624    1528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem (1708 bytes)
	I1212 20:06:46.686999    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:06:46.715172    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 20:06:46.745329    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:06:46.775248    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:06:46.804288    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 20:06:46.833541    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 20:06:46.858974    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:06:46.883320    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:06:46.912462    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:06:46.937010    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem --> /usr/share/ca-certificates/13396.pem (1338 bytes)
	I1212 20:06:46.963968    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /usr/share/ca-certificates/133962.pem (1708 bytes)
	I1212 20:06:46.987545    1528 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:06:47.014201    1528 ssh_runner.go:195] Run: openssl version
	I1212 20:06:47.028684    1528 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:06:47.047532    1528 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:06:47.066889    1528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:06:47.074545    1528 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:06:47.078818    1528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:06:47.128719    1528 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:06:47.145523    1528 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13396.pem
	I1212 20:06:47.162300    1528 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13396.pem /etc/ssl/certs/13396.pem
	I1212 20:06:47.179220    1528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13396.pem
	I1212 20:06:47.188551    1528 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 20:06:47.193732    1528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13396.pem
	I1212 20:06:47.241331    1528 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:06:47.258219    1528 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/133962.pem
	I1212 20:06:47.276085    1528 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/133962.pem /etc/ssl/certs/133962.pem
	I1212 20:06:47.293199    1528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133962.pem
	I1212 20:06:47.300084    1528 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 20:06:47.304026    1528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133962.pem
	I1212 20:06:47.352991    1528 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:06:47.371677    1528 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:06:47.384558    1528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:06:47.433291    1528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:06:47.480566    1528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:06:47.530653    1528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:06:47.582068    1528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:06:47.630287    1528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 20:06:47.673527    1528 kubeadm.go:401] StartCluster: {Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:06:47.678147    1528 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 20:06:47.710789    1528 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:06:47.723256    1528 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 20:06:47.723256    1528 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 20:06:47.727283    1528 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 20:06:47.740989    1528 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:06:47.744500    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:47.805147    1528 kubeconfig.go:125] found "functional-468800" server: "https://127.0.0.1:55778"
	I1212 20:06:47.813022    1528 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 20:06:47.830078    1528 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-12 19:49:17.606323144 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-12 20:06:45.789464240 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1212 20:06:47.830078    1528 kubeadm.go:1161] stopping kube-system containers ...
	I1212 20:06:47.833739    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 20:06:47.872403    1528 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 20:06:47.898698    1528 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:06:47.911626    1528 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 12 19:53 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 12 19:53 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 12 19:53 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 12 19:53 /etc/kubernetes/scheduler.conf
	
	I1212 20:06:47.916032    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:06:47.934293    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:06:47.947871    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:06:47.952020    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:06:47.971701    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:06:47.986795    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:06:47.991166    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:06:48.008021    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:06:48.023761    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:06:48.029138    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:06:48.047659    1528 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:06:48.063995    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:06:48.141323    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:06:48.685789    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:06:48.933405    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:06:49.007626    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:06:49.088118    1528 api_server.go:52] waiting for apiserver process to appear ...
	I1212 20:06:49.091668    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:49.594772    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:50.093859    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	... [probe `sudo pgrep -xnf kube-apiserver.*minikube.*` repeated at ~0.5s intervals, 20:06:50.594422 through 20:07:47.594370, without finding an apiserver process] ...
	I1212 20:07:48.594667    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:49.093256    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:07:49.126325    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.126325    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:07:49.130353    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:07:49.158022    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.158022    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:07:49.162811    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:07:49.190525    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.190525    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:07:49.194310    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:07:49.220030    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.220030    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:07:49.223677    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:07:49.249986    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.249986    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:07:49.253970    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:07:49.282441    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.282441    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:07:49.286057    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:07:49.315225    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.315248    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:07:49.315306    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:07:49.315306    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:07:49.374436    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:07:49.374436    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:07:49.404204    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:07:49.404204    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:07:49.493575    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:07:49.481243   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.484599   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.485624   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.487104   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.488000   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:07:49.481243   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.484599   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.485624   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.487104   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.488000   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:07:49.493575    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:07:49.493575    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:07:49.537752    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:07:49.537752    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:07:52.109985    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:52.133820    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:07:52.164388    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.164388    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:07:52.168109    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:07:52.195605    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.195605    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:07:52.199164    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:07:52.229188    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.229188    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:07:52.232745    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:07:52.256990    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.256990    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:07:52.261539    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:07:52.290862    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.290862    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:07:52.294555    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:07:52.324957    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.324957    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:07:52.330284    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:07:52.359197    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.359197    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:07:52.359197    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:07:52.359197    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:07:52.386524    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:07:52.386524    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:07:52.470690    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:07:52.461396   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.462458   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.463681   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.464521   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.466051   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:07:52.461396   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.462458   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.463681   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.464521   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.466051   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:07:52.470690    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:07:52.470690    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:07:52.511513    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:07:52.511513    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:07:52.560676    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:07:52.560676    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:07:55.127058    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:55.150663    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:07:55.181456    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.181456    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:07:55.184641    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:07:55.217269    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.217269    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:07:55.220911    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:07:55.250346    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.250346    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:07:55.254082    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:07:55.285676    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.285706    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:07:55.288968    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:07:55.315854    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.315854    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:07:55.319386    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:07:55.348937    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.348937    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:07:55.352894    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:07:55.380789    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.380853    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:07:55.380853    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:07:55.380883    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:07:55.463944    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:07:55.453644   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.455074   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.457094   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.458003   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.461398   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:07:55.453644   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.455074   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.457094   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.458003   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.461398   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:07:55.463944    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:07:55.463944    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:07:55.507780    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:07:55.507780    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:07:55.561906    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:07:55.561906    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:07:55.623372    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:07:55.623372    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:07:58.160009    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:58.184039    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:07:58.215109    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.215109    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:07:58.218681    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:07:58.247778    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.247778    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:07:58.251301    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:07:58.278710    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.278710    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:07:58.282296    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:07:58.308953    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.308953    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:07:58.312174    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:07:58.339973    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.340049    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:07:58.343731    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:07:58.374943    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.374943    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:07:58.378660    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:07:58.405372    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.405372    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:07:58.405372    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:07:58.405372    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:07:58.453718    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:07:58.453718    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:07:58.514502    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:07:58.514502    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:07:58.544394    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:07:58.544394    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:07:58.623232    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:07:58.613437   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.615268   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.617776   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.618728   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.619935   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:07:58.613437   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.615268   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.617776   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.618728   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.619935   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:07:58.623232    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:07:58.623232    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:01.169113    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:01.192583    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:01.222434    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.222434    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:01.225873    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:01.253020    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.253020    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:01.257395    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:01.286407    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.286407    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:01.290442    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:01.317408    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.317408    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:01.321138    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:01.348820    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.348820    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:01.352926    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:01.383541    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.383541    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:01.387373    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:01.415400    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.415431    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:01.415431    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:01.415466    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:01.481183    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:01.481183    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:01.512132    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:01.512132    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:01.598560    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:01.586562   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.587448   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.590300   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.591224   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.593552   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:01.586562   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.587448   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.590300   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.591224   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.593552   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:01.598601    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:01.598601    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:01.641848    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:01.641848    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:04.202764    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:04.225393    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:04.257048    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.257048    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:04.261463    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:04.289329    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.289329    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:04.295911    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:04.324136    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.324205    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:04.329272    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:04.355941    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.355941    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:04.359744    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:04.389386    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.389461    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:04.393063    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:04.421465    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.421465    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:04.425377    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:04.454159    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.454159    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:04.454185    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:04.454221    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:04.499238    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:04.499238    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:04.546668    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:04.546668    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:04.614181    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:04.614181    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:04.646155    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:04.646155    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:04.746527    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:04.735346   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.736502   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.738096   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.740089   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.741574   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:04.735346   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.736502   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.738096   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.740089   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.741574   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:07.252038    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:07.276838    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:07.307770    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.307770    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:07.311473    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:07.338086    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.338086    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:07.343809    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:07.373687    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.373687    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:07.377399    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:07.406083    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.406083    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:07.409835    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:07.437651    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.437651    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:07.441428    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:07.468369    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.468369    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:07.472164    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:07.503047    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.503047    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:07.503047    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:07.503811    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:07.531856    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:07.531856    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:07.618451    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:07.605790   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.608413   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.611125   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.611805   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.614027   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:07.605790   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.608413   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.611125   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.611805   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.614027   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:07.618451    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:07.618451    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:07.661072    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:07.661072    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:07.708185    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:07.708185    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:10.277741    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:10.301882    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:10.334646    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.334646    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:10.338176    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:10.369543    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.369543    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:10.372853    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:10.405159    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.405159    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:10.408623    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:10.436491    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.436491    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:10.440653    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:10.471674    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.471674    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:10.475616    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:10.503923    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.503923    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:10.507960    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:10.532755    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.532755    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:10.532755    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:10.532755    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:10.596502    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:10.596502    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:10.627352    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:10.627352    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:10.716582    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:10.705824   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.707137   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.709479   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.710611   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.712209   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:10.705824   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.707137   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.709479   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.710611   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.712209   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:10.716582    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:10.716582    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:10.758177    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:10.758177    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:13.312261    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:13.336629    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:13.366321    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.366321    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:13.370440    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:13.398643    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.398643    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:13.402381    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:13.432456    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.432481    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:13.436213    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:13.464635    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.464711    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:13.468308    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:13.495284    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.495284    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:13.499271    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:13.528325    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.528325    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:13.531787    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:13.562227    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.562227    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:13.562227    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:13.562227    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:13.663593    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:13.651715   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.652789   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.654090   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.658182   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.659318   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:13.651715   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.652789   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.654090   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.658182   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.659318   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:13.663593    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:13.663593    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:13.704702    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:13.704702    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:13.753473    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:13.753473    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:13.816534    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:13.816534    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:16.353541    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:16.376390    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:16.407214    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.407214    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:16.410992    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:16.441225    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.441225    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:16.444710    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:16.474803    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.474803    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:16.478736    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:16.507490    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.507490    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:16.510890    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:16.542100    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.542196    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:16.546032    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:16.575799    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.575799    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:16.579959    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:16.607409    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.607409    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:16.607409    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:16.607409    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:16.635159    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:16.635159    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:16.716319    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:16.704484   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.705339   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.707697   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.710003   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.712279   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:16.704484   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.705339   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.707697   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.710003   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.712279   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:16.716319    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:16.716319    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:16.759176    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:16.759176    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:16.808150    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:16.808180    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:19.374586    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:19.397466    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:19.428699    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.428699    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:19.432104    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:19.459357    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.459357    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:19.463506    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:19.492817    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.492862    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:19.496262    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:19.524604    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.524633    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:19.528245    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:19.554030    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.554030    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:19.557659    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:19.585449    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.585449    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:19.589270    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:19.617715    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.617715    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:19.617715    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:19.617715    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:19.665679    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:19.665679    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:19.731378    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:19.731378    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:19.760660    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:19.760660    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:19.846488    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:19.835706   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.837312   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.839548   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.840426   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.842652   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:19.835706   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.837312   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.839548   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.840426   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.842652   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:19.846488    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:19.846534    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:22.396054    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:22.420446    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:22.451208    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.451246    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:22.455255    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:22.482900    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.482900    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:22.486411    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:22.515383    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.515383    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:22.518824    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:22.550034    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.550034    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:22.553623    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:22.581020    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.581020    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:22.585628    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:22.612869    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.612869    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:22.616928    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:22.644472    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.644472    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:22.644472    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:22.644472    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:22.708075    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:22.708075    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:22.738243    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:22.738270    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:22.821664    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:22.811477   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:22.812684   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:22.813664   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:22.815983   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:22.818141   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:22.811477   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:22.812684   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:22.813664   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:22.815983   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:22.818141   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:22.821664    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:22.821664    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:22.864165    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:22.864165    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:25.420933    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:25.445913    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:25.482750    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.482780    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:25.486866    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:25.513327    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.513327    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:25.516888    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:25.544296    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.544296    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:25.547411    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:25.577831    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.577831    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:25.581764    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:25.611577    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.611577    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:25.614994    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:25.643683    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.643683    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:25.647543    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:25.673764    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.673764    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:25.673764    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:25.673764    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:25.756845    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:25.747568   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:25.748735   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:25.749881   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:25.750867   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:25.753666   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:25.747568   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:25.748735   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:25.749881   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:25.750867   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:25.753666   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:25.756845    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:25.756845    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:25.796355    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:25.796355    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:25.848330    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:25.848330    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:25.908271    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:25.908271    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:28.444198    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:28.466730    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:28.495218    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.496317    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:28.499838    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:28.526946    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.526946    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:28.531098    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:28.558957    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.558957    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:28.563084    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:28.591401    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.591401    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:28.594622    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:28.621536    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.621536    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:28.625599    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:28.652819    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.652819    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:28.655938    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:28.684007    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.684007    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:28.684049    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:28.684049    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:28.766993    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:28.757413   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:28.758465   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:28.762706   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:28.763681   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:28.764948   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:28.757413   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:28.758465   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:28.762706   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:28.763681   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:28.764948   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:28.766993    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:28.766993    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:28.808427    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:28.808427    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:28.854005    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:28.854005    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:28.915072    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:28.915072    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:31.448340    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:31.482817    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:31.516888    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.516948    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:31.520762    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:31.548829    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.548829    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:31.552634    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:31.580202    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.580202    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:31.583832    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:31.612644    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.612644    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:31.616408    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:31.641662    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.641662    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:31.645105    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:31.674858    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.674858    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:31.678481    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:31.708742    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.708742    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:31.708742    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:31.708742    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:31.737537    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:31.737537    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:31.815915    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:31.804811   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.805789   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.806945   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.808415   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.809568   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:31.804811   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.805789   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.806945   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.808415   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.809568   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:31.815915    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:31.815915    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:31.855387    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:31.855387    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:31.902882    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:31.902882    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:34.468874    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:34.492525    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:34.524158    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.524158    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:34.528390    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:34.555356    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.555356    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:34.558734    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:34.589102    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.589171    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:34.592795    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:34.621829    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.621829    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:34.625204    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:34.653376    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.653376    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:34.657009    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:34.683738    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.683738    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:34.686742    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:34.714674    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.714674    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:34.714674    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:34.714674    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:34.779026    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:34.779026    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:34.808978    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:34.808978    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:34.892063    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:34.879879   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.880859   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.883101   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.883949   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.886570   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:34.879879   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.880859   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.883101   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.883949   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.886570   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:34.892063    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:34.892063    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:34.931531    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:34.931531    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:37.485139    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:37.507669    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:37.539156    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.539156    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:37.543011    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:37.573040    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.573040    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:37.576524    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:37.606845    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.606845    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:37.610640    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:37.637362    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.637362    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:37.640345    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:37.667170    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.667203    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:37.670535    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:37.699517    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.699517    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:37.703317    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:37.728898    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.728898    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:37.728898    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:37.728898    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:37.794369    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:37.794369    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:37.824287    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:37.824287    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:37.909344    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:37.898187   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.899265   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.900556   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.901945   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.904063   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:37.898187   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.899265   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.900556   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.901945   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.904063   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:37.909344    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:37.909344    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:37.954162    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:37.954162    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:40.506487    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:40.531085    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:40.562228    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.562228    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:40.566239    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:40.592782    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.592782    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:40.597032    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:40.623771    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.623771    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:40.627181    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:40.653272    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.653272    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:40.657007    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:40.684331    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.684331    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:40.687951    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:40.717873    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.718396    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:40.722742    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:40.750968    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.750968    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:40.750968    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:40.750968    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:40.780652    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:40.780652    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:40.862566    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:40.851236   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.852305   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.854181   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.856807   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.857931   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:40.851236   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.852305   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.854181   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.856807   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.857931   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:40.862566    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:40.862566    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:40.901731    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:40.901731    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:40.950141    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:40.950141    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:43.517065    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:43.542117    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:43.570769    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.570769    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:43.574614    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:43.606209    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.606209    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:43.610144    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:43.636742    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.636742    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:43.640713    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:43.671147    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.671166    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:43.675284    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:43.702707    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.702707    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:43.709331    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:43.739560    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.739560    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:43.743495    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:43.773460    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.773460    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:43.773460    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:43.773460    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:43.839426    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:43.839426    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:43.869067    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:43.869067    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:43.956418    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:43.945790   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.947881   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.949529   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.951167   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.952658   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:43.945790   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.947881   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.949529   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.951167   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.952658   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:43.956418    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:43.956418    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:43.999225    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:43.999225    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:46.559969    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:46.583306    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:46.616304    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.616304    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:46.620185    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:46.649980    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.649980    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:46.653901    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:46.679706    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.679706    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:46.683349    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:46.709377    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.709377    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:46.713435    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:46.743714    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.743714    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:46.747353    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:46.774831    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.774831    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:46.778444    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:46.803849    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.803849    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:46.803849    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:46.803849    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:46.846976    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:46.846976    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:46.898873    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:46.898873    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:46.960800    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:46.960800    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:46.992131    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:46.992131    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:47.078211    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:47.065571   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.069068   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.070187   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.071256   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.072132   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:47.065571   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.069068   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.070187   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.071256   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.072132   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:49.584391    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:49.609888    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:49.644530    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.644530    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:49.648078    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:49.676237    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.676237    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:49.680633    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:49.711496    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.711496    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:49.714503    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:49.741598    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.741598    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:49.746023    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:49.774073    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.774073    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:49.780499    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:49.807422    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.807422    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:49.811492    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:49.837105    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.837105    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:49.837105    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:49.837105    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:49.919888    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:49.910085   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.911433   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.912870   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.913900   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.914973   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:49.910085   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.911433   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.912870   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.913900   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.914973   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:49.919888    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:49.919888    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:49.961375    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:49.961375    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:50.029040    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:50.029040    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:50.091715    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:50.091715    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:52.626760    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:52.650138    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:52.682125    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.682125    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:52.685499    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:52.716677    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.716677    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:52.720251    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:52.750215    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.750215    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:52.753203    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:52.783410    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.783410    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:52.786745    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:52.816028    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.816028    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:52.819028    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:52.847808    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.847808    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:52.851676    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:52.880388    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.880388    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:52.880388    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:52.880388    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:52.927060    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:52.927060    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:52.980540    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:52.980540    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:53.040013    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:53.040013    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:53.068682    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:53.068682    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:53.153542    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:53.142301   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.143114   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.146316   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.148645   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.149541   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:53.142301   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.143114   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.146316   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.148645   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.149541   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:55.659454    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:55.682885    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:55.711696    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.711696    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:55.718399    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:55.746229    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.746229    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:55.750441    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:55.780178    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.780210    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:55.784012    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:55.811985    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.811985    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:55.816792    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:55.847996    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.847996    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:55.851745    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:55.883521    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.883521    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:55.886915    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:55.914853    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.914853    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:55.914853    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:55.914853    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:55.960920    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:55.960920    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:56.026011    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:56.026011    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:56.053113    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:56.053113    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:56.136578    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:56.126520   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.127364   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.130322   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.131490   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.132838   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:56.126520   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.127364   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.130322   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.131490   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.132838   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:56.136578    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:56.136578    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:58.683199    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:58.705404    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:58.735584    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.735584    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:58.739795    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:58.770569    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.770569    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:58.774526    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:58.804440    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.804440    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:58.808498    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:58.836009    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.836009    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:58.840208    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:58.869192    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.869192    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:58.872945    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:58.902237    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.902237    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:58.905993    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:58.933450    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.933617    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:58.933617    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:58.933617    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:58.976315    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:58.976391    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:59.038199    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:59.038199    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:59.068976    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:59.068976    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:59.160516    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:59.150031   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.151396   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.152924   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.154293   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.156675   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:59.150031   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.151396   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.152924   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.154293   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.156675   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:59.160516    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:59.160516    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:01.709859    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:01.733860    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:01.762957    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.762957    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:01.766889    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:01.793351    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.793351    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:01.797156    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:01.823801    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.823801    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:01.827545    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:01.858811    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.858811    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:01.862667    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:01.888526    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.888601    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:01.892330    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:01.921800    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.921834    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:01.925710    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:01.954630    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.954630    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:01.954630    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:01.954630    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:02.019929    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:02.019929    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:02.050304    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:02.050304    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:02.137016    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:02.125446   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.126668   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.127557   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.130278   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.131874   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:02.125446   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.126668   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.127557   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.130278   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.131874   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:02.137016    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:02.137016    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:02.181380    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:02.181380    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:04.738393    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:04.761261    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:04.788560    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.788594    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:04.792550    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:04.822339    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.822339    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:04.826135    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:04.854461    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.854531    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:04.858147    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:04.886243    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.886243    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:04.890144    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:04.918123    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.918123    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:04.922152    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:04.949493    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.949557    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:04.953111    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:04.980390    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.980390    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:04.980390    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:04.980390    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:05.043888    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:05.043888    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:05.075474    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:05.075474    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:05.156773    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:05.145508   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.147081   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.148087   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.150297   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.151035   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:05.145508   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.147081   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.148087   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.150297   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.151035   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:05.156773    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:05.156773    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:05.198847    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:05.198847    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:07.752600    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:07.774442    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:07.801273    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.801315    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:07.804806    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:07.833315    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.833315    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:07.837119    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:07.866393    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.866417    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:07.869980    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:07.898480    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.898480    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:07.902426    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:07.929231    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.929231    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:07.932443    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:07.962786    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.962786    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:07.966343    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:07.993681    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.993681    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:07.993681    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:07.993681    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:08.075996    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:08.065379   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.066297   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.068979   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.070479   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.071802   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:08.065379   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.066297   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.068979   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.070479   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.071802   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:08.075996    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:08.075996    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:08.115751    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:08.115751    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:08.167959    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:08.167959    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:08.229990    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:08.229990    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:10.765802    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:10.787970    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:10.817520    1528 logs.go:282] 0 containers: []
	W1212 20:09:10.817520    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:10.821188    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:10.850905    1528 logs.go:282] 0 containers: []
	W1212 20:09:10.850905    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:10.854741    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:10.882098    1528 logs.go:282] 0 containers: []
	W1212 20:09:10.882098    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:10.885759    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:10.915908    1528 logs.go:282] 0 containers: []
	W1212 20:09:10.915931    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:10.919484    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:10.947704    1528 logs.go:282] 0 containers: []
	W1212 20:09:10.947704    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:10.951840    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:10.979998    1528 logs.go:282] 0 containers: []
	W1212 20:09:10.979998    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:10.983440    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:11.012620    1528 logs.go:282] 0 containers: []
	W1212 20:09:11.012620    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:11.012620    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:11.012620    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:11.075910    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:11.075910    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:11.105013    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:11.105013    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:11.184242    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:11.174258   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.175183   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.177449   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.178930   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.180215   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:11.174258   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.175183   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.177449   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.178930   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.180215   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:11.184242    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:11.184242    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:11.228072    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:11.228072    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:13.782352    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:13.806071    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:13.835380    1528 logs.go:282] 0 containers: []
	W1212 20:09:13.835380    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:13.839913    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:13.866644    1528 logs.go:282] 0 containers: []
	W1212 20:09:13.866644    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:13.870648    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:13.900617    1528 logs.go:282] 0 containers: []
	W1212 20:09:13.900687    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:13.904431    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:13.928026    1528 logs.go:282] 0 containers: []
	W1212 20:09:13.928026    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:13.931830    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:13.961813    1528 logs.go:282] 0 containers: []
	W1212 20:09:13.961813    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:13.965790    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:13.993658    1528 logs.go:282] 0 containers: []
	W1212 20:09:13.993658    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:13.997303    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:14.025708    1528 logs.go:282] 0 containers: []
	W1212 20:09:14.025708    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:14.025708    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:14.025708    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:14.106478    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:14.097472   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.098766   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.100198   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.101259   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.102326   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:14.097472   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.098766   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.100198   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.101259   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.102326   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:14.106478    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:14.106478    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:14.148128    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:14.148128    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:14.203808    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:14.203885    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:14.267083    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:14.267083    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:16.803844    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:16.828076    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:16.857370    1528 logs.go:282] 0 containers: []
	W1212 20:09:16.857370    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:16.861602    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:16.888928    1528 logs.go:282] 0 containers: []
	W1212 20:09:16.888928    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:16.892594    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:16.918950    1528 logs.go:282] 0 containers: []
	W1212 20:09:16.918950    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:16.922184    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:16.949697    1528 logs.go:282] 0 containers: []
	W1212 20:09:16.949697    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:16.953615    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:16.980582    1528 logs.go:282] 0 containers: []
	W1212 20:09:16.980582    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:16.984239    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:17.011537    1528 logs.go:282] 0 containers: []
	W1212 20:09:17.011537    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:17.015236    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:17.044025    1528 logs.go:282] 0 containers: []
	W1212 20:09:17.044025    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:17.044059    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:17.044059    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:17.108593    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:17.108593    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:17.140984    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:17.140984    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:17.223600    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:17.212237   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.213454   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.216019   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.218047   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.219305   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 20:09:17.223647    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:17.223647    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:17.265808    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:17.265808    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:19.827665    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:19.848754    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:19.880440    1528 logs.go:282] 0 containers: []
	W1212 20:09:19.880440    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:19.884631    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:19.911688    1528 logs.go:282] 0 containers: []
	W1212 20:09:19.911688    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:19.915503    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:19.942894    1528 logs.go:282] 0 containers: []
	W1212 20:09:19.942894    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:19.946623    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:19.974622    1528 logs.go:282] 0 containers: []
	W1212 20:09:19.974622    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:19.978983    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:20.005201    1528 logs.go:282] 0 containers: []
	W1212 20:09:20.005201    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:20.009244    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:20.040298    1528 logs.go:282] 0 containers: []
	W1212 20:09:20.040298    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:20.043935    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:20.073267    1528 logs.go:282] 0 containers: []
	W1212 20:09:20.073267    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:20.073267    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:20.073267    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:20.139351    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:20.139351    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:20.170692    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:20.170692    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:20.255758    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:20.244828   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.245691   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.248701   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.249629   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.251916   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 20:09:20.255758    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:20.255758    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:20.296082    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:20.296082    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:22.852656    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:22.877113    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:22.907531    1528 logs.go:282] 0 containers: []
	W1212 20:09:22.907601    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:22.911006    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:22.938103    1528 logs.go:282] 0 containers: []
	W1212 20:09:22.938103    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:22.941741    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:22.969757    1528 logs.go:282] 0 containers: []
	W1212 20:09:22.969757    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:22.973641    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:23.003718    1528 logs.go:282] 0 containers: []
	W1212 20:09:23.003718    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:23.007427    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:23.034105    1528 logs.go:282] 0 containers: []
	W1212 20:09:23.034105    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:23.038551    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:23.068440    1528 logs.go:282] 0 containers: []
	W1212 20:09:23.068440    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:23.072250    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:23.099797    1528 logs.go:282] 0 containers: []
	W1212 20:09:23.099797    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:23.099797    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:23.099797    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:23.127441    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:23.127441    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:23.213420    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:23.200013   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.205001   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.207092   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.207990   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.210181   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 20:09:23.213420    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:23.213420    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:23.258155    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:23.258155    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:23.304413    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:23.304413    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:25.871188    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:25.894216    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:25.924994    1528 logs.go:282] 0 containers: []
	W1212 20:09:25.924994    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:25.928893    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:25.956143    1528 logs.go:282] 0 containers: []
	W1212 20:09:25.956143    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:25.961174    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:25.988898    1528 logs.go:282] 0 containers: []
	W1212 20:09:25.988898    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:25.993364    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:26.021169    1528 logs.go:282] 0 containers: []
	W1212 20:09:26.021233    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:26.024829    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:26.051922    1528 logs.go:282] 0 containers: []
	W1212 20:09:26.051922    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:26.055062    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:26.082542    1528 logs.go:282] 0 containers: []
	W1212 20:09:26.082542    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:26.086788    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:26.117355    1528 logs.go:282] 0 containers: []
	W1212 20:09:26.117355    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:26.117355    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:26.117355    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:26.180352    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:26.180352    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:26.211105    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:26.211105    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:26.296971    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:26.286964   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.287943   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.291491   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.292547   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.294617   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 20:09:26.296971    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:26.296971    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:26.338711    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:26.338711    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:28.896860    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:28.920643    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:28.950389    1528 logs.go:282] 0 containers: []
	W1212 20:09:28.950389    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:28.955391    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:28.982117    1528 logs.go:282] 0 containers: []
	W1212 20:09:28.982117    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:28.986142    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:29.015662    1528 logs.go:282] 0 containers: []
	W1212 20:09:29.015662    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:29.019455    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:29.049660    1528 logs.go:282] 0 containers: []
	W1212 20:09:29.049660    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:29.053631    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:29.081889    1528 logs.go:282] 0 containers: []
	W1212 20:09:29.081889    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:29.086411    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:29.114138    1528 logs.go:282] 0 containers: []
	W1212 20:09:29.114138    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:29.119659    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:29.150078    1528 logs.go:282] 0 containers: []
	W1212 20:09:29.150078    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:29.150078    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:29.150078    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:29.214085    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:29.214085    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:29.248111    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:29.248111    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:29.331531    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:29.323301   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.324246   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.326232   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.327289   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.328469   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 20:09:29.331531    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:29.331573    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:29.371475    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:29.371475    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:31.925581    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:31.948416    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:31.979393    1528 logs.go:282] 0 containers: []
	W1212 20:09:31.979436    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:31.982941    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:32.012671    1528 logs.go:282] 0 containers: []
	W1212 20:09:32.012745    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:32.016490    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:32.044571    1528 logs.go:282] 0 containers: []
	W1212 20:09:32.044571    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:32.049959    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:32.077737    1528 logs.go:282] 0 containers: []
	W1212 20:09:32.077737    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:32.082023    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:32.112680    1528 logs.go:282] 0 containers: []
	W1212 20:09:32.112680    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:32.116732    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:32.144079    1528 logs.go:282] 0 containers: []
	W1212 20:09:32.144079    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:32.147365    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:32.175674    1528 logs.go:282] 0 containers: []
	W1212 20:09:32.175674    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:32.175674    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:32.175674    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:32.238433    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:32.238433    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:32.268680    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:32.268680    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:32.350924    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:32.339559   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.340729   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.342082   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.343375   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.344861   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 20:09:32.351446    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:32.351446    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:32.393409    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:32.393409    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:34.949675    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:34.974371    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:35.003673    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.003673    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:35.007894    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:35.036794    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.036794    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:35.040718    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:35.068827    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.068827    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:35.073552    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:35.101505    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.101505    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:35.105374    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:35.132637    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.132637    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:35.135977    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:35.164108    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.164108    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:35.168327    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:35.196237    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.196237    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:35.196237    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:35.196237    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:35.225096    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:35.225096    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:35.310720    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:35.296566   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.297390   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.300154   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.302418   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.302966   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:35.296566   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.297390   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.300154   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.302418   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.302966   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:35.310720    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:35.310720    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:35.352640    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:35.352640    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:35.405163    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:35.405684    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:37.970126    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:37.993740    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:38.021567    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.021567    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:38.025733    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:38.054259    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.054259    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:38.058230    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:38.091609    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.091609    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:38.094726    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:38.121402    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.121402    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:38.124780    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:38.156230    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.156230    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:38.159968    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:38.187111    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.187111    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:38.191000    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:38.219114    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.219114    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:38.219114    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:38.219163    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:38.267592    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:38.267642    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:38.332291    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:38.332291    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:38.362654    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:38.362654    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:38.450249    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:38.438367   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.439369   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.441361   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.442677   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.443575   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:38.438367   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.439369   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.441361   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.442677   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.443575   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:38.450249    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:38.450249    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:41.000122    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:41.025061    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:41.056453    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.056453    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:41.060356    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:41.090046    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.090046    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:41.096769    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:41.124375    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.124375    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:41.128276    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:41.155835    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.155835    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:41.159800    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:41.188748    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.188748    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:41.193110    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:41.220152    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.220152    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:41.224010    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:41.252532    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.252532    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:41.252532    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:41.252532    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:41.316983    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:41.316983    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:41.347558    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:41.347558    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:41.428225    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:41.416671   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.417861   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.419148   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.420206   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.420911   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:41.416671   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.417861   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.419148   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.420206   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.420911   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:41.428225    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:41.428225    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:41.470919    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:41.470919    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:44.030446    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:44.055047    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:44.084459    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.084459    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:44.088206    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:44.117052    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.117052    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:44.120537    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:44.147556    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.147556    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:44.152098    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:44.180075    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.180075    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:44.183790    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:44.210767    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.210767    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:44.214367    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:44.240217    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.240217    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:44.244696    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:44.273318    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.273318    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:44.273318    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:44.273371    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:44.339517    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:44.339517    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:44.369771    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:44.369771    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:44.450064    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:44.438924   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.439839   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.441693   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.442539   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.445500   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:44.438924   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.439839   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.441693   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.442539   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.445500   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:44.450064    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:44.450064    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:44.493504    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:44.493504    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:47.062950    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:47.087994    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:47.118381    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.118409    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:47.121556    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:47.150429    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.150429    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:47.154790    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:47.182604    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.182604    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:47.186262    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:47.213354    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.213354    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:47.217174    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:47.246442    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.246442    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:47.251292    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:47.280336    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.280336    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:47.283865    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:47.311245    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.311323    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:47.311323    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:47.311323    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:47.374063    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:47.374063    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:47.404257    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:47.404257    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:47.493784    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:47.483027   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.484386   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.488247   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.489212   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.490430   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:47.483027   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.484386   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.488247   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.489212   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.490430   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:47.493784    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:47.493784    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:47.546267    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:47.546267    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:50.104321    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:50.126581    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:50.155564    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.155564    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:50.160428    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:50.189268    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.189268    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:50.192916    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:50.218955    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.218955    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:50.222686    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:50.249342    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.249342    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:50.253397    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:50.283028    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.283028    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:50.286951    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:50.325979    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.325979    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:50.329622    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:50.358362    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.358362    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:50.358362    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:50.358362    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:50.422488    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:50.422488    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:50.452652    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:50.452652    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:50.550551    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:50.541153   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.542372   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.543369   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.544652   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.545772   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:50.541153   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.542372   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.543369   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.544652   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.545772   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:50.550602    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:50.550602    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:50.590552    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:50.590552    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:53.158722    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:53.182259    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:53.211903    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.211903    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:53.215402    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:53.243958    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.243958    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:53.247562    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:53.275751    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.275751    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:53.279763    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:53.306836    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.306836    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:53.310872    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:53.337813    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.337813    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:53.341633    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:53.371291    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.371291    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:53.374974    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:53.401726    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.401726    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:53.401726    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:53.401726    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:53.484480    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:53.475528   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.476720   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.477955   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.479240   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.480197   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:53.475528   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.476720   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.477955   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.479240   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.480197   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:53.484480    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:53.484480    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:53.548050    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:53.548050    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:53.599287    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:53.599439    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:53.660624    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:53.660624    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:56.196823    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:56.221135    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:56.250407    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.250407    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:56.254016    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:56.285901    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.285901    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:56.290067    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:56.318341    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.318341    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:56.321789    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:56.352739    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.352739    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:56.356470    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:56.384106    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.384106    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:56.388211    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:56.415890    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.415890    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:56.420087    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:56.447932    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.447932    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:56.447932    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:56.447932    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:56.477708    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:56.477708    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:56.588387    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:56.579332   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.580404   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.581153   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.583503   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.584370   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:56.579332   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.580404   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.581153   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.583503   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.584370   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:56.588387    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:56.588387    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:56.628140    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:56.629024    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:56.673720    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:56.673720    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:59.242052    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:59.264739    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:59.293601    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.293601    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:59.297772    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:59.324701    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.324701    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:59.328642    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:59.358373    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.358373    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:59.362425    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:59.392638    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.392638    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:59.396206    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:59.423777    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.423777    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:59.427998    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:59.455368    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.455368    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:59.460647    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:59.488029    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.488029    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:59.488029    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:59.488029    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:59.548806    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:59.548806    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:59.580620    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:59.580620    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:59.670291    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:59.659775   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.660719   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.663685   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.664723   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.665878   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:59.659775   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.660719   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.663685   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.664723   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.665878   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:59.670291    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:59.670291    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:59.715000    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:59.715000    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:02.271675    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:02.295613    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:02.328792    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.328792    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:02.332483    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:02.364136    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.364136    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:02.368415    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:02.396018    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.396018    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:02.399987    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:02.426946    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.426946    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:02.430641    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:02.457307    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.457307    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:02.461639    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:02.490776    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.490776    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:02.495011    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:02.535030    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.535030    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:02.535030    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:02.535030    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:02.598020    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:02.598020    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:02.627885    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:02.627885    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:02.704890    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:02.692184   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.693898   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.695260   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.696802   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.698053   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:02.692184   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.693898   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.695260   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.696802   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.698053   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:02.704939    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:02.704939    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:02.743781    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:02.743781    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:05.296529    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:05.320338    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:05.350975    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.350975    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:05.354341    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:05.384954    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.384954    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:05.389226    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:05.416593    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.416663    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:05.420370    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:05.448275    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.448306    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:05.451950    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:05.489214    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.489214    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:05.492826    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:05.542815    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.542815    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:05.546994    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:05.577967    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.577967    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:05.577967    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:05.577967    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:05.666752    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:05.655586   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.656570   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.657861   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.659073   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.660173   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:05.655586   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.656570   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.657861   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.659073   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.660173   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:05.666752    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:05.666752    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:05.710699    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:05.710699    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:05.761552    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:05.761552    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:05.824698    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:05.824698    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:08.358868    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:08.384185    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:08.414077    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.414077    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:08.417802    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:08.449585    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.449585    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:08.453707    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:08.481690    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.481690    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:08.485802    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:08.526849    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.526849    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:08.530588    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:08.561211    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.561211    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:08.565127    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:08.592694    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.592781    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:08.596577    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:08.625262    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.625262    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:08.625262    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:08.625335    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:08.685169    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:08.685169    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:08.715897    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:08.715897    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:08.803701    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:08.791784   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.793050   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.794221   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.795525   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.797359   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:08.791784   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.793050   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.794221   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.795525   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.797359   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:08.803701    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:08.803701    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:08.843054    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:08.843054    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:11.399600    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:11.423207    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:11.452824    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.452824    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:11.456632    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:11.485718    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.485718    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:11.489975    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:11.516373    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.516442    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:11.520086    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:11.550008    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.550008    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:11.553479    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:11.582422    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.582422    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:11.586067    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:11.614204    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.614204    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:11.617891    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:11.647117    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.647117    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:11.647117    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:11.647117    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:11.708885    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:11.708885    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:11.738490    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:11.738490    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:11.827046    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:11.816517   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.817635   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.818643   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.819970   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.821231   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:11.816517   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.817635   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.818643   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.819970   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.821231   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:11.827046    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:11.827107    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:11.866493    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:11.866493    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:14.418219    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:14.441326    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:14.471617    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.471617    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:14.475764    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:14.525977    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.525977    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:14.530095    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:14.559065    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.559065    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:14.562300    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:14.591222    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.591222    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:14.595004    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:14.623409    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.623409    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:14.626892    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:14.654709    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.654709    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:14.658517    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:14.685033    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.685033    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:14.685033    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:14.685033    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:14.729797    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:14.729797    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:14.775571    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:14.775571    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:14.837326    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:14.837326    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:14.868773    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:14.868773    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:14.947701    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:14.936523   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.939018   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.940394   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.941385   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.944155   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:14.936523   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.939018   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.940394   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.941385   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.944155   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:17.453450    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:17.476221    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:17.508293    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.508388    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:17.512181    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:17.543844    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.543844    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:17.547662    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:17.575201    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.575201    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:17.578822    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:17.606210    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.606210    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:17.609909    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:17.635671    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.635671    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:17.639317    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:17.668567    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.668567    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:17.671701    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:17.698754    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.698754    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:17.698754    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:17.698835    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:17.746368    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:17.746368    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:17.807375    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:17.807375    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:17.838385    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:17.838385    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:17.926603    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:17.913747   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.914551   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.916660   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.920134   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.920887   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:17.913747   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.914551   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.916660   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.920134   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.920887   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:17.926603    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:17.926648    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:20.475641    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:20.498334    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:20.527197    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.527197    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:20.530922    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:20.557934    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.557934    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:20.561696    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:20.589458    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.589458    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:20.593618    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:20.618953    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.619013    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:20.622779    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:20.650087    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.650087    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:20.653349    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:20.680898    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.680898    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:20.684841    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:20.711841    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.711841    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:20.711841    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:20.711841    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:20.773325    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:20.773325    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:20.802932    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:20.802932    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:20.882468    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:20.873604   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.874733   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.875947   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.877488   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.878762   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:20.873604   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.874733   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.875947   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.877488   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.878762   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:20.882468    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:20.882468    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:20.924918    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:20.924918    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:23.483925    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:23.503925    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:23.531502    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.531502    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:23.535209    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:23.566493    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.566493    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:23.569915    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:23.598869    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.598869    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:23.603128    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:23.629658    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.629658    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:23.633104    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:23.659718    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.659718    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:23.663327    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:23.693156    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.693156    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:23.696530    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:23.727025    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.727025    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:23.727025    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:23.727025    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:23.788970    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:23.788970    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:23.819732    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:23.819732    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:23.903797    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:23.893226   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.894440   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.895734   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.897141   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.899253   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:23.893226   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.894440   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.895734   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.897141   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.899253   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:23.903797    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:23.903797    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:23.943716    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:23.943716    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:26.496986    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:26.519387    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:26.546439    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.546439    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:26.550311    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:26.579658    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.579658    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:26.583767    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:26.611690    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.611690    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:26.616096    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:26.642773    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.642773    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:26.646291    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:26.674086    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.674086    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:26.677423    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:26.705896    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.705896    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:26.709747    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:26.736563    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.736563    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:26.736563    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:26.736563    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:26.797921    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:26.797921    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:26.827915    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:26.827915    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:26.912180    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:26.902978   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.904059   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.905126   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.906124   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.907093   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:26.902978   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.904059   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.905126   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.906124   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.907093   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:26.912180    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:26.912180    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:26.952784    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:26.952784    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:29.506291    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:29.528153    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:29.558126    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.558126    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:29.562358    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:29.592320    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.592320    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:29.596049    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:29.628556    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.628556    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:29.632809    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:29.657311    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.657311    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:29.661781    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:29.690232    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.690261    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:29.693735    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:29.722288    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.722288    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:29.725599    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:29.757022    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.757022    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:29.757057    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:29.757057    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:29.838684    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:29.829268   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.831893   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.834799   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.835771   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.837035   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:29.829268   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.831893   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.834799   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.835771   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.837035   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:29.838684    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:29.840075    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:29.881968    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:29.881968    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:29.937264    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:29.937264    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:30.003954    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:30.003954    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:32.543156    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:32.567379    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:32.595089    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.595089    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:32.599147    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:32.627893    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.627962    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:32.631484    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:32.658969    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.658969    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:32.662719    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:32.689837    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.689837    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:32.693526    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:32.719931    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.719931    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:32.723427    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:32.754044    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.754044    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:32.757365    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:32.785242    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.785242    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:32.785242    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:32.785242    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:32.866344    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:32.854769   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.856146   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.858548   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.859713   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.861545   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:32.854769   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.856146   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.858548   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.859713   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.861545   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:32.866344    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:32.866344    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:32.910000    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:32.910000    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:32.959713    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:32.959713    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:33.023739    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:33.023739    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:35.563488    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:35.587848    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:35.619497    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.619497    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:35.625107    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:35.653936    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.653936    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:35.657619    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:35.684524    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.684524    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:35.687685    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:35.718759    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.718759    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:35.722575    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:35.749655    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.749655    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:35.753297    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:35.780974    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.780974    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:35.784685    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:35.810182    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.810182    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:35.810182    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:35.810182    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:35.892605    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:35.881515   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.884461   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.886394   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.887420   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.888495   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:35.881515   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.884461   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.886394   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.887420   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.888495   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:35.892605    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:35.892605    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:35.932890    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:35.932890    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:35.985679    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:35.985679    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:36.046361    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:36.046361    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:38.583800    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:38.606814    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:38.638211    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.638211    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:38.642266    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:38.669848    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.669848    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:38.673886    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:38.700984    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.700984    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:38.705078    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:38.729910    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.729910    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:38.733986    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:38.760705    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.760705    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:38.765121    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:38.799915    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.799915    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:38.804009    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:38.833364    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.833364    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:38.833364    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:38.833364    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:38.913728    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:38.904474   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.905728   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.906533   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.908851   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.910188   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:38.904474   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.905728   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.906533   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.908851   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.910188   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:38.914694    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:38.914694    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:38.953812    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:38.953812    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:38.999712    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:38.999712    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:39.060789    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:39.060789    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:41.597593    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:41.620430    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:41.650082    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.650082    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:41.653991    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:41.681237    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.681306    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:41.684963    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:41.713795    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.713795    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:41.719712    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:41.749037    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.749037    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:41.753070    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:41.779427    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.779427    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:41.783501    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:41.815751    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.815751    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:41.819560    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:41.847881    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.847881    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:41.847881    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:41.847931    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:41.927320    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:41.917717   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.918601   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.921188   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.923369   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.924481   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:41.917717   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.918601   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.921188   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.923369   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.924481   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:41.927320    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:41.927320    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:41.970940    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:41.970940    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:42.027555    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:42.027555    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:42.089451    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:42.089451    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:44.625751    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:44.648990    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:44.676551    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.676585    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:44.679722    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:44.709172    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.709172    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:44.713304    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:44.743046    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.743046    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:44.748526    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:44.778521    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.778521    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:44.782734    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:44.814603    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.814603    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:44.817683    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:44.845948    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.845948    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:44.849265    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:44.879812    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.879812    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:44.879812    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:44.879812    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:44.944127    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:44.944127    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:44.974113    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:44.974113    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:45.057102    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:45.045651   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.047975   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.050361   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.051485   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.052325   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:45.045651   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.047975   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.050361   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.051485   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.052325   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:45.057102    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:45.057102    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:45.100139    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:45.100139    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:47.652183    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:47.675849    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:47.706239    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.706239    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:47.709475    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:47.741233    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.741233    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:47.744861    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:47.774055    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.774055    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:47.777505    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:47.805794    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.805794    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:47.808964    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:47.836392    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.836392    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:47.841779    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:47.870715    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.870715    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:47.874288    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:47.901831    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.901831    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:47.901831    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:47.901831    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:47.944346    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:47.944346    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:47.988778    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:47.988778    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:48.052537    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:48.052537    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:48.083339    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:48.083339    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:48.169498    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:48.160314   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.161274   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.162381   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.163778   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.164689   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:48.160314   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.161274   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.162381   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.163778   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.164689   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:50.675888    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:50.695141    1528 kubeadm.go:602] duration metric: took 4m2.9691176s to restartPrimaryControlPlane
	W1212 20:10:50.695255    1528 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 20:10:50.699541    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1212 20:10:51.173784    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:10:51.196593    1528 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:10:51.210961    1528 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:10:51.215040    1528 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:10:51.228862    1528 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:10:51.228862    1528 kubeadm.go:158] found existing configuration files:
	
	I1212 20:10:51.232787    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:10:51.246730    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:10:51.251357    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:10:51.268580    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:10:51.283713    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:10:51.288367    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:10:51.308779    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:10:51.322868    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:10:51.327510    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:10:51.347243    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:10:51.360015    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:10:51.365274    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:10:51.383196    1528 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:10:51.503494    1528 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1212 20:10:51.590365    1528 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 20:10:51.685851    1528 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:14:52.890657    1528 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 20:14:52.890657    1528 kubeadm.go:319] 
	I1212 20:14:52.891189    1528 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 20:14:52.897133    1528 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 20:14:52.897133    1528 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:14:52.897133    1528 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:14:52.897133    1528 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_INET: enabled
	I1212 20:14:52.898464    1528 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 20:14:52.898582    1528 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 20:14:52.898779    1528 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 20:14:52.898920    1528 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 20:14:52.899045    1528 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 20:14:52.899131    1528 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 20:14:52.899262    1528 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 20:14:52.899432    1528 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 20:14:52.899517    1528 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 20:14:52.899644    1528 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 20:14:52.899729    1528 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 20:14:52.899847    1528 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 20:14:52.900038    1528 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 20:14:52.900217    1528 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 20:14:52.900390    1528 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 20:14:52.900502    1528 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 20:14:52.900574    1528 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 20:14:52.900710    1528 kubeadm.go:319] OS: Linux
	I1212 20:14:52.900833    1528 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:14:52.900915    1528 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 20:14:52.901708    1528 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:14:52.901818    1528 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:14:52.901818    1528 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:14:52.901818    1528 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:14:52.906810    1528 out.go:252]   - Generating certificates and keys ...
	I1212 20:14:52.906810    1528 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:14:52.906810    1528 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:14:52.906810    1528 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 20:14:52.907808    1528 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:14:52.907808    1528 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:14:52.907808    1528 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:14:52.907808    1528 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:14:52.908849    1528 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:14:52.908909    1528 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:14:52.908909    1528 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:14:52.908909    1528 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:14:52.912070    1528 out.go:252]   - Booting up control plane ...
	I1212 20:14:52.912070    1528 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:14:52.912070    1528 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:14:52.912070    1528 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:14:52.912070    1528 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:14:52.913075    1528 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:14:52.913075    1528 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:14:52.913075    1528 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:14:52.913075    1528 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:14:52.914083    1528 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:14:52.914083    1528 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:14:52.914083    1528 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000441542s
	I1212 20:14:52.914083    1528 kubeadm.go:319] 
	I1212 20:14:52.914083    1528 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 20:14:52.914083    1528 kubeadm.go:319] 	- The kubelet is not running
	I1212 20:14:52.914083    1528 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 20:14:52.914083    1528 kubeadm.go:319] 
	I1212 20:14:52.914083    1528 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 20:14:52.915069    1528 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 20:14:52.915069    1528 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 20:14:52.915069    1528 kubeadm.go:319] 
	W1212 20:14:52.915069    1528 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000441542s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 20:14:52.921774    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1212 20:14:53.390305    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:14:53.408818    1528 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:14:53.413243    1528 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:14:53.425325    1528 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:14:53.425325    1528 kubeadm.go:158] found existing configuration files:
	
	I1212 20:14:53.430625    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:14:53.442895    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:14:53.446965    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:14:53.464658    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:14:53.478038    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:14:53.482805    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:14:53.499083    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:14:53.513919    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:14:53.518566    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:14:53.538555    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:14:53.552479    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:14:53.557205    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:14:53.576642    1528 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:14:53.698383    1528 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1212 20:14:53.775189    1528 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 20:14:53.868267    1528 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:18:54.359522    1528 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 20:18:54.359522    1528 kubeadm.go:319] 
	I1212 20:18:54.359522    1528 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 20:18:54.362954    1528 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 20:18:54.363173    1528 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:18:54.363383    1528 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:18:54.363609    1528 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 20:18:54.363609    1528 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 20:18:54.363609    1528 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 20:18:54.363609    1528 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 20:18:54.363609    1528 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 20:18:54.363609    1528 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 20:18:54.364132    1528 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 20:18:54.364423    1528 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 20:18:54.364423    1528 kubeadm.go:319] CONFIG_INET: enabled
	I1212 20:18:54.364423    1528 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 20:18:54.364423    1528 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 20:18:54.364423    1528 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 20:18:54.364423    1528 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 20:18:54.364950    1528 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 20:18:54.365131    1528 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 20:18:54.365131    1528 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 20:18:54.365131    1528 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 20:18:54.365131    1528 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 20:18:54.365131    1528 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 20:18:54.365131    1528 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 20:18:54.365662    1528 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 20:18:54.365743    1528 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 20:18:54.365828    1528 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 20:18:54.365917    1528 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 20:18:54.366005    1528 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 20:18:54.366087    1528 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 20:18:54.366168    1528 kubeadm.go:319] OS: Linux
	I1212 20:18:54.366224    1528 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:18:54.366255    1528 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 20:18:54.366823    1528 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:18:54.366960    1528 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:18:54.367127    1528 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:18:54.367127    1528 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:18:54.369422    1528 out.go:252]   - Generating certificates and keys ...
	I1212 20:18:54.369422    1528 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:18:54.369422    1528 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:18:54.369953    1528 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 20:18:54.370159    1528 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 20:18:54.370228    1528 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 20:18:54.370309    1528 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 20:18:54.370471    1528 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 20:18:54.370639    1528 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 20:18:54.370717    1528 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 20:18:54.370717    1528 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 20:18:54.370717    1528 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 20:18:54.370717    1528 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:18:54.370717    1528 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:18:54.371251    1528 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:18:54.371313    1528 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:18:54.371344    1528 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:18:54.371344    1528 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:18:54.371344    1528 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:18:54.371344    1528 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:18:54.374291    1528 out.go:252]   - Booting up control plane ...
	I1212 20:18:54.374291    1528 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:18:54.374291    1528 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:18:54.374291    1528 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:18:54.374291    1528 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:18:54.375259    1528 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:18:54.375259    1528 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:18:54.375259    1528 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:18:54.375259    1528 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:18:54.375259    1528 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:18:54.375259    1528 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:18:54.375259    1528 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000961807s
	I1212 20:18:54.375259    1528 kubeadm.go:319] 
	I1212 20:18:54.376246    1528 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 20:18:54.376246    1528 kubeadm.go:319] 	- The kubelet is not running
	I1212 20:18:54.376405    1528 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 20:18:54.376405    1528 kubeadm.go:319] 
	I1212 20:18:54.376405    1528 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 20:18:54.376405    1528 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 20:18:54.376405    1528 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 20:18:54.376405    1528 kubeadm.go:319] 
	I1212 20:18:54.376405    1528 kubeadm.go:403] duration metric: took 12m6.6943451s to StartCluster
	I1212 20:18:54.376405    1528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:18:54.380250    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:18:54.441453    1528 cri.go:89] found id: ""
	I1212 20:18:54.441453    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.441453    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:18:54.441453    1528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:18:54.446414    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:18:54.508794    1528 cri.go:89] found id: ""
	I1212 20:18:54.508794    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.508794    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:18:54.508794    1528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:18:54.513698    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:18:54.553213    1528 cri.go:89] found id: ""
	I1212 20:18:54.553257    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.553257    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:18:54.553295    1528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:18:54.558235    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:18:54.603262    1528 cri.go:89] found id: ""
	I1212 20:18:54.603262    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.603262    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:18:54.603262    1528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:18:54.608185    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:18:54.648151    1528 cri.go:89] found id: ""
	I1212 20:18:54.648151    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.648151    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:18:54.648151    1528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:18:54.652647    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:18:54.693419    1528 cri.go:89] found id: ""
	I1212 20:18:54.693419    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.693419    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:18:54.693419    1528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:18:54.697661    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:18:54.737800    1528 cri.go:89] found id: ""
	I1212 20:18:54.737800    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.737800    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:18:54.737858    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:18:54.737858    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:18:54.790460    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:18:54.790460    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:18:54.852887    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:18:54.852887    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:18:54.883744    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:18:54.883744    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:18:54.965870    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:18:54.956009   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.957362   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.959496   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.962003   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.963316   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:18:54.956009   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.957362   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.959496   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.962003   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.963316   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:18:54.965870    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:18:54.965870    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1212 20:18:55.009075    1528 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000961807s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 20:18:55.009075    1528 out.go:285] * 
	W1212 20:18:55.009075    1528 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000961807s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 20:18:55.009075    1528 out.go:285] * 
	W1212 20:18:55.011173    1528 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:18:55.016858    1528 out.go:203] 
	W1212 20:18:55.021226    1528 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000961807s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 20:18:55.021226    1528 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 20:18:55.021226    1528 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 20:18:55.024694    1528 out.go:203] 
	
	
	==> Docker <==
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.259960912Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.259968912Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.260013916Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.260053720Z" level=info msg="Initializing buildkit"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.356181012Z" level=info msg="Completed buildkit initialization"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.364821976Z" level=info msg="Daemon has completed initialization"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.364991591Z" level=info msg="API listen on /run/docker.sock"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.365009292Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.365041495Z" level=info msg="API listen on [::]:2376"
	Dec 12 20:06:44 functional-468800 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 12 20:06:44 functional-468800 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 20:06:44 functional-468800 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 12 20:06:44 functional-468800 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 12 20:06:45 functional-468800 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Start docker client with request timeout 0s"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Loaded network plugin cni"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 12 20:06:45 functional-468800 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:18:56.883016   40798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:56.884245   40798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:56.885759   40798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:56.888283   40798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:56.889647   40798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000759] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000963] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000962] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001270] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000859] FS:  0000000000000000 GS:  0000000000000000
	[Dec12 20:06] CPU: 0 PID: 65148 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000914] RIP: 0033:0x7f777423db20
	[  +0.000416] Code: Unable to access opcode bytes at RIP 0x7f777423daf6.
	[  +0.000639] RSP: 002b:00007ffce8bd2080 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000797] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000800] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000998] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001310] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001295] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.002739] FS:  0000000000000000 GS:  0000000000000000
	[  +0.817521] CPU: 11 PID: 65286 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000931] RIP: 0033:0x7f6f12da3b20
	[  +0.000392] Code: Unable to access opcode bytes at RIP 0x7f6f12da3af6.
	[  +0.000657] RSP: 002b:00007ffe2eabe1e0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000812] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000817] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000776] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000784] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000994] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001070] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 20:18:56 up  1:20,  0 user,  load average: 0.11, 0.27, 0.42
	Linux functional-468800 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 20:18:53 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:18:54 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 12 20:18:54 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:18:54 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:18:54 functional-468800 kubelet[40526]: E1212 20:18:54.452816   40526 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:18:54 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:18:54 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:18:55 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 12 20:18:55 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:18:55 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:18:55 functional-468800 kubelet[40651]: E1212 20:18:55.254196   40651 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:18:55 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:18:55 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:18:55 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 12 20:18:55 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:18:55 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:18:55 functional-468800 kubelet[40679]: E1212 20:18:55.955349   40679 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:18:55 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:18:55 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:18:56 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 12 20:18:56 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:18:56 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:18:56 functional-468800 kubelet[40739]: E1212 20:18:56.710291   40739 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:18:56 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:18:56 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-468800 -n functional-468800
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-468800 -n functional-468800: exit status 2 (585.3094ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-468800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (740.02s)

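The kubelet log above fails repeatedly with "kubelet is configured to not run on a host using cgroup v1", so the WSL2 kernel in this run is still exposing the legacy cgroup hierarchy. A minimal way to check which hierarchy a host is on (plain coreutils; running it inside the minikube node is an assumption, not something the test harness does):

```shell
# Print the filesystem type mounted at the cgroup root.
# "cgroup2fs" means the unified cgroup v2 hierarchy;
# "tmpfs" means the legacy cgroup v1 layout that kubelet v1.35 rejects.
stat -fc %T /sys/fs/cgroup
```

On WSL2, a commonly suggested workaround is setting `kernelCommandLine = cgroup_no_v1=all` under `[wsl2]` in `%UserProfile%\.wslconfig` and then running `wsl --shutdown`; note that the `--extra-config=kubelet.cgroup-driver=systemd` suggestion printed by minikube concerns the cgroup driver, not the v1-vs-v2 hierarchy this run tripped on.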
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (53.88s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-468800 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-468800 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (50.356078s)

** stderr ** 
	E1212 20:19:08.824444    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:19:18.909803    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:19:28.955814    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:19:38.996342    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:19:49.042367    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-468800 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-468800
helpers_test.go:244: (dbg) docker inspect functional-468800:

-- stdout --
	[
	    {
	        "Id": "0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356",
	        "Created": "2025-12-12T19:49:05.216637357Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42493,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T19:49:05.485304524Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/hosts",
	        "LogPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356-json.log",
	        "Name": "/functional-468800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-468800:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-468800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06-init/diff:/var/lib/docker/overlay2/98a48e148b465d6f3cfef773b7defe35ccd4e6e0eb3c49d8b329f27b5b52fa09/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-468800",
	                "Source": "/var/lib/docker/volumes/functional-468800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-468800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-468800",
	                "name.minikube.sigs.k8s.io": "functional-468800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1c576a1bc459e3ecef05356a56ff79fb60b3540cf362d7af9e4f9ccd4a4dded4",
	            "SandboxKey": "/var/run/docker/netns/1c576a1bc459",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55779"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55780"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55781"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55777"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55778"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-468800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a0db2e63885ea6d1e4cf32e8285a0f1e1fe10bae67ef9ab957c6270bdb8c136b",
	                    "EndpointID": "d5e453160824066e5fee477ceb2ca5411fd18d149268eed593084ba8a0cf13dd",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-468800",
	                        "0d3494c9d93e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-468800 -n functional-468800
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-468800 -n functional-468800: exit status 2 (579.3592ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-468800 logs -n 25: (1.2244709s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                          ARGS                                                           │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-461000 image ls                                                                                              │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ image   │ functional-461000 image ls --format json --alsologtostderr                                                              │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ image   │ functional-461000 image ls --format table --alsologtostderr                                                             │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ image   │ functional-461000 image ls --format short --alsologtostderr                                                             │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │ 12 Dec 25 19:43 UTC │
	│ service │ functional-461000 service hello-node --url --format={{.IP}}                                                             │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:43 UTC │                     │
	│ service │ functional-461000 service hello-node --url                                                                              │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:44 UTC │                     │
	│ delete  │ -p functional-461000                                                                                                    │ functional-461000 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:48 UTC │ 12 Dec 25 19:48 UTC │
	│ start   │ -p functional-468800 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:48 UTC │                     │
	│ start   │ -p functional-468800 --alsologtostderr -v=8                                                                             │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:57 UTC │                     │
	│ cache   │ functional-468800 cache add registry.k8s.io/pause:3.1                                                                   │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ cache   │ functional-468800 cache add registry.k8s.io/pause:3.3                                                                   │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ cache   │ functional-468800 cache add registry.k8s.io/pause:latest                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ cache   │ functional-468800 cache add minikube-local-cache-test:functional-468800                                                 │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ cache   │ functional-468800 cache delete minikube-local-cache-test:functional-468800                                              │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ cache   │ list                                                                                                                    │ minikube          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ ssh     │ functional-468800 ssh sudo crictl images                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ ssh     │ functional-468800 ssh sudo docker rmi registry.k8s.io/pause:latest                                                      │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ ssh     │ functional-468800 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │                     │
	│ cache   │ functional-468800 cache reload                                                                                          │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ ssh     │ functional-468800 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                     │ minikube          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ kubectl │ functional-468800 kubectl -- --context functional-468800 get pods                                                       │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │                     │
	│ start   │ -p functional-468800 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:06 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:06:38
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:06:38.727985    1528 out.go:360] Setting OutFile to fd 1056 ...
	I1212 20:06:38.773098    1528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:06:38.773098    1528 out.go:374] Setting ErrFile to fd 1212...
	I1212 20:06:38.773098    1528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:06:38.787709    1528 out.go:368] Setting JSON to false
	I1212 20:06:38.790304    1528 start.go:133] hostinfo: {"hostname":"minikube4","uptime":4136,"bootTime":1765565861,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 20:06:38.790304    1528 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 20:06:38.796304    1528 out.go:179] * [functional-468800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 20:06:38.800290    1528 notify.go:221] Checking for updates...
	I1212 20:06:38.800290    1528 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 20:06:38.802303    1528 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:06:38.805306    1528 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 20:06:38.807332    1528 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:06:38.808856    1528 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:06:38.812430    1528 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 20:06:38.812430    1528 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:06:38.929707    1528 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 20:06:38.933677    1528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:06:39.195122    1528 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-12 20:06:39.177384092 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 20:06:39.201119    1528 out.go:179] * Using the docker driver based on existing profile
	I1212 20:06:39.203117    1528 start.go:309] selected driver: docker
	I1212 20:06:39.203117    1528 start.go:927] validating driver "docker" against &{Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreD
NSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:06:39.203117    1528 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:06:39.209122    1528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:06:39.449342    1528 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-12 20:06:39.430307853 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 20:06:39.528922    1528 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:06:39.529468    1528 cni.go:84] Creating CNI manager for ""
	I1212 20:06:39.529468    1528 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 20:06:39.529468    1528 start.go:353] cluster config:
	{Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDN
SLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:06:39.533005    1528 out.go:179] * Starting "functional-468800" primary control-plane node in "functional-468800" cluster
	I1212 20:06:39.535095    1528 cache.go:134] Beginning downloading kic base image for docker with docker
	I1212 20:06:39.537607    1528 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:06:39.540959    1528 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 20:06:39.540959    1528 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:06:39.540959    1528 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1212 20:06:39.540959    1528 cache.go:65] Caching tarball of preloaded images
	I1212 20:06:39.541554    1528 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 20:06:39.541554    1528 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1212 20:06:39.541554    1528 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\config.json ...
	I1212 20:06:39.619509    1528 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:06:39.619509    1528 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:06:39.619509    1528 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:06:39.619509    1528 start.go:360] acquireMachinesLock for functional-468800: {Name:mk7e7177bdfcb5a7969561474f8bb14fa15c1eab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:06:39.619509    1528 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-468800"
	I1212 20:06:39.620041    1528 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:06:39.620041    1528 fix.go:54] fixHost starting: 
	I1212 20:06:39.627157    1528 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
	I1212 20:06:39.683014    1528 fix.go:112] recreateIfNeeded on functional-468800: state=Running err=<nil>
	W1212 20:06:39.683376    1528 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 20:06:39.686124    1528 out.go:252] * Updating the running docker "functional-468800" container ...
	I1212 20:06:39.686124    1528 machine.go:94] provisionDockerMachine start ...
	I1212 20:06:39.689814    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:39.744908    1528 main.go:143] libmachine: Using SSH client type: native
	I1212 20:06:39.745476    1528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 20:06:39.745476    1528 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:06:39.930965    1528 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-468800
	
	I1212 20:06:39.931078    1528 ubuntu.go:182] provisioning hostname "functional-468800"
	I1212 20:06:39.934795    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:39.989752    1528 main.go:143] libmachine: Using SSH client type: native
	I1212 20:06:39.990452    1528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 20:06:39.990452    1528 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-468800 && echo "functional-468800" | sudo tee /etc/hostname
	I1212 20:06:40.176756    1528 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-468800
	
	I1212 20:06:40.180410    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:40.235554    1528 main.go:143] libmachine: Using SSH client type: native
	I1212 20:06:40.236742    1528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 20:06:40.236742    1528 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-468800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-468800/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-468800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:06:40.410965    1528 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:06:40.410965    1528 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1212 20:06:40.410965    1528 ubuntu.go:190] setting up certificates
	I1212 20:06:40.410965    1528 provision.go:84] configureAuth start
	I1212 20:06:40.414835    1528 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-468800
	I1212 20:06:40.468680    1528 provision.go:143] copyHostCerts
	I1212 20:06:40.468680    1528 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1212 20:06:40.468680    1528 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1212 20:06:40.468680    1528 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 20:06:40.469680    1528 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1212 20:06:40.469680    1528 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1212 20:06:40.469680    1528 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 20:06:40.470682    1528 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1212 20:06:40.470682    1528 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1212 20:06:40.470682    1528 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1212 20:06:40.471679    1528 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-468800 san=[127.0.0.1 192.168.49.2 functional-468800 localhost minikube]
	I1212 20:06:40.521679    1528 provision.go:177] copyRemoteCerts
	I1212 20:06:40.526217    1528 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:06:40.529224    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:40.578843    1528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 20:06:40.705122    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 20:06:40.732235    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 20:06:40.758034    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 20:06:40.787536    1528 provision.go:87] duration metric: took 376.5012ms to configureAuth
	I1212 20:06:40.787564    1528 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:06:40.788016    1528 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 20:06:40.791899    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:40.847433    1528 main.go:143] libmachine: Using SSH client type: native
	I1212 20:06:40.847433    1528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 20:06:40.847433    1528 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 20:06:41.031514    1528 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1212 20:06:41.031514    1528 ubuntu.go:71] root file system type: overlay
	I1212 20:06:41.031514    1528 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 20:06:41.035525    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:41.089326    1528 main.go:143] libmachine: Using SSH client type: native
	I1212 20:06:41.090065    1528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 20:06:41.090155    1528 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 20:06:41.283431    1528 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 20:06:41.287473    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:41.343081    1528 main.go:143] libmachine: Using SSH client type: native
	I1212 20:06:41.343562    1528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 20:06:41.343562    1528 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 20:06:41.525616    1528 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:06:41.525616    1528 machine.go:97] duration metric: took 1.8394714s to provisionDockerMachine
	I1212 20:06:41.525616    1528 start.go:293] postStartSetup for "functional-468800" (driver="docker")
	I1212 20:06:41.525616    1528 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:06:41.530519    1528 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:06:41.534083    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:41.586502    1528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 20:06:41.720007    1528 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:06:41.727943    1528 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:06:41.727943    1528 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:06:41.727943    1528 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1212 20:06:41.727943    1528 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1212 20:06:41.728602    1528 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> 133962.pem in /etc/ssl/certs
	I1212 20:06:41.729437    1528 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\13396\hosts -> hosts in /etc/test/nested/copy/13396
	I1212 20:06:41.733519    1528 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/13396
	I1212 20:06:41.745958    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /etc/ssl/certs/133962.pem (1708 bytes)
	I1212 20:06:41.772738    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\13396\hosts --> /etc/test/nested/copy/13396/hosts (40 bytes)
	I1212 20:06:41.802626    1528 start.go:296] duration metric: took 277.0071ms for postStartSetup
	I1212 20:06:41.807164    1528 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:06:41.809505    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:41.864695    1528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 20:06:41.985729    1528 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:06:41.994649    1528 fix.go:56] duration metric: took 2.3745808s for fixHost
	I1212 20:06:41.994649    1528 start.go:83] releasing machines lock for "functional-468800", held for 2.3751133s
	I1212 20:06:41.998707    1528 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-468800
	I1212 20:06:42.059230    1528 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1212 20:06:42.063903    1528 ssh_runner.go:195] Run: cat /version.json
	I1212 20:06:42.063903    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:42.066691    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:42.116356    1528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 20:06:42.117357    1528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	W1212 20:06:42.228585    1528 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1212 20:06:42.232646    1528 ssh_runner.go:195] Run: systemctl --version
	I1212 20:06:42.247485    1528 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:06:42.257236    1528 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:06:42.263875    1528 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:06:42.279473    1528 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:06:42.279473    1528 start.go:496] detecting cgroup driver to use...
	I1212 20:06:42.279473    1528 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 20:06:42.283549    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:06:42.307873    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 20:06:42.326439    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 20:06:42.341366    1528 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 20:06:42.345268    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1212 20:06:42.347179    1528 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1212 20:06:42.347179    1528 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1212 20:06:42.365551    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 20:06:42.385740    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 20:06:42.407021    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 20:06:42.427172    1528 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:06:42.448213    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 20:06:42.467444    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 20:06:42.487296    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 20:06:42.507050    1528 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:06:42.524437    1528 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:06:42.541928    1528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:06:42.701987    1528 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 20:06:42.867618    1528 start.go:496] detecting cgroup driver to use...
	I1212 20:06:42.867618    1528 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 20:06:42.872524    1528 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 20:06:42.900833    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:06:42.922770    1528 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:06:42.982495    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:06:43.005292    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 20:06:43.026719    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:06:43.052829    1528 ssh_runner.go:195] Run: which cri-dockerd
	I1212 20:06:43.064606    1528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 20:06:43.079549    1528 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1212 20:06:43.104999    1528 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 20:06:43.240280    1528 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 20:06:43.379193    1528 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 20:06:43.379358    1528 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 20:06:43.405761    1528 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1212 20:06:43.427392    1528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:06:43.565288    1528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 20:06:44.374705    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:06:44.396001    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1212 20:06:44.418749    1528 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1212 20:06:44.445721    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 20:06:44.466663    1528 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 20:06:44.598807    1528 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 20:06:44.740962    1528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:06:44.883493    1528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 20:06:44.907977    1528 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1212 20:06:44.931006    1528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:06:45.071046    1528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1212 20:06:45.171465    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 20:06:45.190143    1528 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 20:06:45.194535    1528 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 20:06:45.202518    1528 start.go:564] Will wait 60s for crictl version
	I1212 20:06:45.206873    1528 ssh_runner.go:195] Run: which crictl
	I1212 20:06:45.221614    1528 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:06:45.263002    1528 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1212 20:06:45.266767    1528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 20:06:45.308717    1528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 20:06:45.348580    1528 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1212 20:06:45.352493    1528 cli_runner.go:164] Run: docker exec -t functional-468800 dig +short host.docker.internal
	I1212 20:06:45.482840    1528 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1212 20:06:45.487311    1528 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1212 20:06:45.498523    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:45.552748    1528 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1212 20:06:45.554383    1528 kubeadm.go:884] updating cluster {Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:06:45.554933    1528 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 20:06:45.558499    1528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 20:06:45.589105    1528 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-468800
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1212 20:06:45.589105    1528 docker.go:621] Images already preloaded, skipping extraction
	I1212 20:06:45.592742    1528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 20:06:45.625313    1528 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-468800
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1212 20:06:45.625313    1528 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:06:45.625313    1528 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1212 20:06:45.625829    1528 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-468800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:06:45.629232    1528 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 20:06:45.698056    1528 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1212 20:06:45.698078    1528 cni.go:84] Creating CNI manager for ""
	I1212 20:06:45.698133    1528 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 20:06:45.698180    1528 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:06:45.698180    1528 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-468800 NodeName:functional-468800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:06:45.698180    1528 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-468800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:06:45.702170    1528 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 20:06:45.714209    1528 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:06:45.719390    1528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:06:45.731628    1528 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1212 20:06:45.753236    1528 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 20:06:45.772644    1528 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I1212 20:06:45.798125    1528 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 20:06:45.809796    1528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:06:45.998447    1528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:06:46.682417    1528 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800 for IP: 192.168.49.2
	I1212 20:06:46.682417    1528 certs.go:195] generating shared ca certs ...
	I1212 20:06:46.682417    1528 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:06:46.683216    1528 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1212 20:06:46.683331    1528 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1212 20:06:46.683331    1528 certs.go:257] generating profile certs ...
	I1212 20:06:46.683996    1528 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\client.key
	I1212 20:06:46.684112    1528 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.key.a2fee78d
	I1212 20:06:46.684112    1528 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.key
	I1212 20:06:46.685029    1528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem (1338 bytes)
	W1212 20:06:46.685029    1528 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396_empty.pem, impossibly tiny 0 bytes
	I1212 20:06:46.685029    1528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1212 20:06:46.685554    1528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 20:06:46.685624    1528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 20:06:46.685624    1528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1212 20:06:46.685624    1528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem (1708 bytes)
	I1212 20:06:46.686999    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:06:46.715172    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 20:06:46.745329    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:06:46.775248    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:06:46.804288    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 20:06:46.833541    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 20:06:46.858974    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:06:46.883320    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:06:46.912462    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:06:46.937010    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem --> /usr/share/ca-certificates/13396.pem (1338 bytes)
	I1212 20:06:46.963968    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /usr/share/ca-certificates/133962.pem (1708 bytes)
	I1212 20:06:46.987545    1528 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:06:47.014201    1528 ssh_runner.go:195] Run: openssl version
	I1212 20:06:47.028684    1528 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:06:47.047532    1528 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:06:47.066889    1528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:06:47.074545    1528 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:06:47.078818    1528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:06:47.128719    1528 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:06:47.145523    1528 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13396.pem
	I1212 20:06:47.162300    1528 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13396.pem /etc/ssl/certs/13396.pem
	I1212 20:06:47.179220    1528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13396.pem
	I1212 20:06:47.188551    1528 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 20:06:47.193732    1528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13396.pem
	I1212 20:06:47.241331    1528 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:06:47.258219    1528 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/133962.pem
	I1212 20:06:47.276085    1528 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/133962.pem /etc/ssl/certs/133962.pem
	I1212 20:06:47.293199    1528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133962.pem
	I1212 20:06:47.300084    1528 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 20:06:47.304026    1528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133962.pem
	I1212 20:06:47.352991    1528 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:06:47.371677    1528 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:06:47.384558    1528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:06:47.433291    1528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:06:47.480566    1528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:06:47.530653    1528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:06:47.582068    1528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:06:47.630287    1528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 20:06:47.673527    1528 kubeadm.go:401] StartCluster: {Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:06:47.678147    1528 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 20:06:47.710789    1528 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:06:47.723256    1528 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 20:06:47.723256    1528 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 20:06:47.727283    1528 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 20:06:47.740989    1528 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:06:47.744500    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:47.805147    1528 kubeconfig.go:125] found "functional-468800" server: "https://127.0.0.1:55778"
	I1212 20:06:47.813022    1528 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 20:06:47.830078    1528 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-12 19:49:17.606323144 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-12 20:06:45.789464240 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1212 20:06:47.830078    1528 kubeadm.go:1161] stopping kube-system containers ...
	I1212 20:06:47.833739    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 20:06:47.872403    1528 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 20:06:47.898698    1528 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:06:47.911626    1528 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 12 19:53 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 12 19:53 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 12 19:53 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 12 19:53 /etc/kubernetes/scheduler.conf
	
	I1212 20:06:47.916032    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:06:47.934293    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:06:47.947871    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:06:47.952020    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:06:47.971701    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:06:47.986795    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:06:47.991166    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:06:48.008021    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:06:48.023761    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:06:48.029138    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:06:48.047659    1528 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:06:48.063995    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:06:48.141323    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:06:48.685789    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:06:48.933405    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:06:49.007626    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:06:49.088118    1528 api_server.go:52] waiting for apiserver process to appear ...
	I1212 20:06:49.091668    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:49.594772    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:50.093859    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:50.594422    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:51.093806    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:51.593915    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:52.093893    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:52.594038    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:53.093417    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:53.593495    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:54.093802    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:54.594146    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:55.095283    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:55.594629    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:56.094166    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:56.593508    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:57.093792    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:57.594191    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:58.094043    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:58.593447    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:59.095461    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:59.594593    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:00.093887    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:00.593742    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:01.093796    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:01.593635    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:02.094124    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:02.594164    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:03.094112    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:03.593477    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:04.093750    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:04.595391    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:05.094206    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:05.595179    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:06.094740    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:06.594021    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:07.092923    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:07.594420    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:08.093543    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:08.593353    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:09.093866    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:09.594009    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:10.094124    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:10.593564    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:11.094124    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:11.594786    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:12.093907    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:12.595728    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:13.095070    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:13.594017    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:14.094874    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:14.595001    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:15.094580    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:15.594646    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:16.095074    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:16.594850    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:17.094067    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:17.594147    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:18.094262    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:18.594277    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:19.094229    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:19.593986    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:20.093873    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:20.593102    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:21.093881    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:21.594308    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:22.093613    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:22.594040    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:23.094021    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:23.594274    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:24.093605    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:24.594142    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:25.094736    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:25.593265    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:26.094197    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:26.594872    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:27.095670    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:27.594279    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:28.093920    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:28.596679    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:29.094004    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:29.594458    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:30.093715    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:30.594515    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:31.094349    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:31.594711    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:32.094230    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:32.594083    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:33.093810    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:33.595024    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:34.094786    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:34.594107    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:35.094421    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:35.594761    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:36.095704    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:36.596396    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:37.094385    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:37.593669    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:38.094137    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:38.595560    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:39.094405    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:39.595146    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:40.094116    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:40.595721    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:41.096666    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:41.595141    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:42.094696    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:42.595232    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:43.094232    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:43.595329    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:44.094121    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:44.594251    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:45.094024    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:45.594712    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:46.093802    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:46.594279    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:47.094868    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:47.594370    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:48.093917    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:48.594667    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:49.093256    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:07:49.126325    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.126325    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:07:49.130353    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:07:49.158022    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.158022    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:07:49.162811    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:07:49.190525    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.190525    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:07:49.194310    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:07:49.220030    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.220030    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:07:49.223677    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:07:49.249986    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.249986    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:07:49.253970    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:07:49.282441    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.282441    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:07:49.286057    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:07:49.315225    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.315248    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:07:49.315306    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:07:49.315306    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:07:49.374436    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:07:49.374436    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:07:49.404204    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:07:49.404204    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:07:49.493575    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:07:49.481243   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.484599   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.485624   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.487104   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.488000   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:07:49.481243   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.484599   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.485624   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.487104   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.488000   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:07:49.493575    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:07:49.493575    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:07:49.537752    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:07:49.537752    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:07:52.109985    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:52.133820    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:07:52.164388    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.164388    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:07:52.168109    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:07:52.195605    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.195605    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:07:52.199164    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:07:52.229188    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.229188    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:07:52.232745    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:07:52.256990    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.256990    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:07:52.261539    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:07:52.290862    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.290862    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:07:52.294555    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:07:52.324957    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.324957    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:07:52.330284    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:07:52.359197    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.359197    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:07:52.359197    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:07:52.359197    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:07:52.386524    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:07:52.386524    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:07:52.470690    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:07:52.461396   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.462458   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.463681   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.464521   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.466051   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:07:52.461396   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.462458   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.463681   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.464521   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.466051   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:07:52.470690    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:07:52.470690    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:07:52.511513    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:07:52.511513    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:07:52.560676    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:07:52.560676    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:07:55.127058    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:55.150663    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:07:55.181456    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.181456    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:07:55.184641    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:07:55.217269    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.217269    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:07:55.220911    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:07:55.250346    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.250346    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:07:55.254082    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:07:55.285676    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.285706    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:07:55.288968    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:07:55.315854    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.315854    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:07:55.319386    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:07:55.348937    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.348937    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:07:55.352894    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:07:55.380789    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.380853    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:07:55.380853    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:07:55.380883    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:07:55.463944    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:07:55.453644   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.455074   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.457094   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.458003   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.461398   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:07:55.453644   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.455074   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.457094   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.458003   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.461398   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:07:55.463944    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:07:55.463944    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:07:55.507780    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:07:55.507780    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:07:55.561906    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:07:55.561906    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:07:55.623372    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:07:55.623372    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:07:58.160009    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:58.184039    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:07:58.215109    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.215109    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:07:58.218681    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:07:58.247778    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.247778    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:07:58.251301    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:07:58.278710    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.278710    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:07:58.282296    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:07:58.308953    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.308953    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:07:58.312174    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:07:58.339973    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.340049    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:07:58.343731    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:07:58.374943    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.374943    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:07:58.378660    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:07:58.405372    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.405372    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:07:58.405372    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:07:58.405372    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:07:58.453718    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:07:58.453718    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:07:58.514502    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:07:58.514502    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:07:58.544394    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:07:58.544394    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:07:58.623232    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:07:58.613437   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.615268   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.617776   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.618728   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.619935   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:07:58.613437   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.615268   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.617776   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.618728   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.619935   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:07:58.623232    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:07:58.623232    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:01.169113    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:01.192583    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:01.222434    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.222434    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:01.225873    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:01.253020    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.253020    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:01.257395    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:01.286407    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.286407    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:01.290442    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:01.317408    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.317408    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:01.321138    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:01.348820    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.348820    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:01.352926    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:01.383541    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.383541    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:01.387373    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:01.415400    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.415431    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:01.415431    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:01.415466    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:01.481183    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:01.481183    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:01.512132    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:01.512132    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:01.598560    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:01.586562   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.587448   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.590300   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.591224   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.593552   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:01.586562   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.587448   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.590300   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.591224   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.593552   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:01.598601    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:01.598601    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:01.641848    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:01.641848    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:04.202764    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:04.225393    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:04.257048    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.257048    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:04.261463    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:04.289329    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.289329    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:04.295911    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:04.324136    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.324205    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:04.329272    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:04.355941    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.355941    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:04.359744    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:04.389386    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.389461    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:04.393063    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:04.421465    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.421465    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:04.425377    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:04.454159    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.454159    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:04.454185    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:04.454221    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:04.499238    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:04.499238    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:04.546668    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:04.546668    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:04.614181    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:04.614181    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:04.646155    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:04.646155    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:04.746527    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:04.735346   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.736502   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.738096   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.740089   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.741574   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:04.735346   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.736502   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.738096   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.740089   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.741574   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:07.252038    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:07.276838    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:07.307770    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.307770    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:07.311473    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:07.338086    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.338086    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:07.343809    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:07.373687    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.373687    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:07.377399    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:07.406083    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.406083    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:07.409835    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:07.437651    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.437651    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:07.441428    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:07.468369    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.468369    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:07.472164    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:07.503047    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.503047    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:07.503047    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:07.503811    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:07.531856    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:07.531856    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:07.618451    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:07.605790   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.608413   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.611125   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.611805   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.614027   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:07.605790   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.608413   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.611125   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.611805   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.614027   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:07.618451    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:07.618451    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:07.661072    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:07.661072    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:07.708185    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:07.708185    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:10.277741    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:10.301882    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:10.334646    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.334646    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:10.338176    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:10.369543    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.369543    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:10.372853    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:10.405159    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.405159    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:10.408623    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:10.436491    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.436491    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:10.440653    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:10.471674    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.471674    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:10.475616    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:10.503923    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.503923    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:10.507960    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:10.532755    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.532755    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:10.532755    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:10.532755    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:10.596502    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:10.596502    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:10.627352    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:10.627352    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:10.716582    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:10.705824   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.707137   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.709479   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.710611   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.712209   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:10.705824   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.707137   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.709479   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.710611   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.712209   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:10.716582    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:10.716582    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:10.758177    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:10.758177    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:13.312261    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:13.336629    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:13.366321    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.366321    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:13.370440    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:13.398643    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.398643    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:13.402381    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:13.432456    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.432481    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:13.436213    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:13.464635    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.464711    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:13.468308    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:13.495284    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.495284    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:13.499271    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:13.528325    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.528325    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:13.531787    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:13.562227    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.562227    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:13.562227    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:13.562227    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:13.663593    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:13.651715   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.652789   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.654090   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.658182   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.659318   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:13.651715   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.652789   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.654090   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.658182   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.659318   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:13.663593    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:13.663593    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:13.704702    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:13.704702    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:13.753473    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:13.753473    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:13.816534    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:13.816534    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:16.353541    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:16.376390    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:16.407214    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.407214    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:16.410992    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:16.441225    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.441225    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:16.444710    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:16.474803    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.474803    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:16.478736    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:16.507490    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.507490    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:16.510890    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:16.542100    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.542196    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:16.546032    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:16.575799    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.575799    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:16.579959    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:16.607409    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.607409    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:16.607409    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:16.607409    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:16.635159    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:16.635159    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:16.716319    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:16.704484   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.705339   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.707697   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.710003   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.712279   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:16.704484   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.705339   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.707697   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.710003   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.712279   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:16.716319    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:16.716319    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:16.759176    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:16.759176    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:16.808150    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:16.808180    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:19.374586    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:19.397466    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:19.428699    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.428699    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:19.432104    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:19.459357    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.459357    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:19.463506    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:19.492817    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.492862    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:19.496262    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:19.524604    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.524633    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:19.528245    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:19.554030    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.554030    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:19.557659    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:19.585449    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.585449    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:19.589270    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:19.617715    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.617715    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:19.617715    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:19.617715    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:19.665679    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:19.665679    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:19.731378    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:19.731378    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:19.760660    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:19.760660    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:19.846488    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:19.835706   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.837312   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.839548   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.840426   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.842652   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:19.835706   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.837312   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.839548   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.840426   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.842652   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:19.846488    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:19.846534    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:22.396054    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:22.420446    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:22.451208    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.451246    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:22.455255    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:22.482900    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.482900    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:22.486411    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:22.515383    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.515383    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:22.518824    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:22.550034    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.550034    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:22.553623    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:22.581020    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.581020    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:22.585628    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:22.612869    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.612869    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:22.616928    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:22.644472    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.644472    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:22.644472    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:22.644472    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:22.708075    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:22.708075    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:22.738243    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:22.738270    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:22.821664    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:22.811477   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:22.812684   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:22.813664   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:22.815983   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:22.818141   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:22.811477   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:22.812684   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:22.813664   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:22.815983   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:22.818141   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:22.821664    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:22.821664    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:22.864165    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:22.864165    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:25.420933    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:25.445913    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:25.482750    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.482780    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:25.486866    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:25.513327    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.513327    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:25.516888    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:25.544296    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.544296    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:25.547411    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:25.577831    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.577831    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:25.581764    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:25.611577    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.611577    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:25.614994    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:25.643683    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.643683    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:25.647543    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:25.673764    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.673764    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:25.673764    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:25.673764    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:25.756845    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:25.747568   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:25.748735   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:25.749881   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:25.750867   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:25.753666   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:25.747568   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:25.748735   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:25.749881   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:25.750867   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:25.753666   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:25.756845    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:25.756845    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:25.796355    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:25.796355    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:25.848330    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:25.848330    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:25.908271    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:25.908271    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:28.444198    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:28.466730    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:28.495218    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.496317    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:28.499838    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:28.526946    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.526946    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:28.531098    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:28.558957    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.558957    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:28.563084    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:28.591401    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.591401    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:28.594622    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:28.621536    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.621536    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:28.625599    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:28.652819    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.652819    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:28.655938    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:28.684007    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.684007    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:28.684049    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:28.684049    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:28.766993    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:28.757413   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:28.758465   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:28.762706   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:28.763681   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:28.764948   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:28.757413   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:28.758465   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:28.762706   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:28.763681   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:28.764948   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:28.766993    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:28.766993    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:28.808427    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:28.808427    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:28.854005    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:28.854005    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:28.915072    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:28.915072    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:31.448340    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:31.482817    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:31.516888    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.516948    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:31.520762    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:31.548829    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.548829    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:31.552634    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:31.580202    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.580202    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:31.583832    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:31.612644    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.612644    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:31.616408    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:31.641662    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.641662    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:31.645105    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:31.674858    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.674858    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:31.678481    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:31.708742    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.708742    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:31.708742    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:31.708742    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:31.737537    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:31.737537    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:31.815915    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:31.804811   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.805789   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.806945   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.808415   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.809568   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:31.804811   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.805789   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.806945   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.808415   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.809568   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:31.815915    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:31.815915    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:31.855387    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:31.855387    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:31.902882    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:31.902882    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:34.468874    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:34.492525    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:34.524158    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.524158    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:34.528390    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:34.555356    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.555356    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:34.558734    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:34.589102    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.589171    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:34.592795    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:34.621829    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.621829    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:34.625204    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:34.653376    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.653376    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:34.657009    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:34.683738    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.683738    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:34.686742    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:34.714674    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.714674    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:34.714674    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:34.714674    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:34.779026    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:34.779026    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:34.808978    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:34.808978    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:34.892063    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:34.879879   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.880859   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.883101   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.883949   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.886570   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:34.879879   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.880859   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.883101   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.883949   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.886570   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:34.892063    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:34.892063    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:34.931531    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:34.931531    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:37.485139    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:37.507669    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:37.539156    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.539156    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:37.543011    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:37.573040    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.573040    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:37.576524    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:37.606845    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.606845    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:37.610640    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:37.637362    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.637362    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:37.640345    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:37.667170    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.667203    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:37.670535    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:37.699517    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.699517    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:37.703317    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:37.728898    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.728898    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:37.728898    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:37.728898    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:37.794369    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:37.794369    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:37.824287    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:37.824287    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:37.909344    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:37.898187   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.899265   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.900556   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.901945   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.904063   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:37.898187   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.899265   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.900556   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.901945   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.904063   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:37.909344    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:37.909344    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:37.954162    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:37.954162    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:40.506487    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:40.531085    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:40.562228    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.562228    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:40.566239    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:40.592782    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.592782    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:40.597032    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:40.623771    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.623771    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:40.627181    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:40.653272    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.653272    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:40.657007    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:40.684331    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.684331    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:40.687951    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:40.717873    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.718396    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:40.722742    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:40.750968    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.750968    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:40.750968    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:40.750968    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:40.780652    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:40.780652    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:40.862566    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:40.851236   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.852305   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.854181   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.856807   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.857931   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:40.851236   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.852305   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.854181   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.856807   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.857931   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:40.862566    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:40.862566    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:40.901731    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:40.901731    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:40.950141    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:40.950141    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:43.517065    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:43.542117    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:43.570769    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.570769    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:43.574614    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:43.606209    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.606209    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:43.610144    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:43.636742    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.636742    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:43.640713    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:43.671147    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.671166    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:43.675284    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:43.702707    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.702707    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:43.709331    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:43.739560    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.739560    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:43.743495    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:43.773460    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.773460    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:43.773460    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:43.773460    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:43.839426    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:43.839426    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:43.869067    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:43.869067    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:43.956418    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:43.945790   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.947881   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.949529   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.951167   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.952658   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:43.945790   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.947881   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.949529   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.951167   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.952658   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:43.956418    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:43.956418    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:43.999225    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:43.999225    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:46.559969    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:46.583306    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:46.616304    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.616304    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:46.620185    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:46.649980    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.649980    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:46.653901    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:46.679706    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.679706    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:46.683349    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:46.709377    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.709377    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:46.713435    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:46.743714    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.743714    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:46.747353    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:46.774831    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.774831    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:46.778444    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:46.803849    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.803849    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:46.803849    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:46.803849    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:46.846976    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:46.846976    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:46.898873    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:46.898873    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:46.960800    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:46.960800    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:46.992131    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:46.992131    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:47.078211    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:47.065571   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.069068   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.070187   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.071256   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.072132   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:47.065571   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.069068   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.070187   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.071256   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.072132   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:49.584391    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:49.609888    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:49.644530    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.644530    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:49.648078    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:49.676237    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.676237    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:49.680633    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:49.711496    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.711496    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:49.714503    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:49.741598    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.741598    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:49.746023    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:49.774073    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.774073    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:49.780499    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:49.807422    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.807422    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:49.811492    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:49.837105    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.837105    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:49.837105    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:49.837105    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:49.919888    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:49.910085   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.911433   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.912870   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.913900   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.914973   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:49.910085   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.911433   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.912870   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.913900   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.914973   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:49.919888    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:49.919888    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:49.961375    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:49.961375    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:50.029040    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:50.029040    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:50.091715    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:50.091715    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:52.626760    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:52.650138    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:52.682125    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.682125    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:52.685499    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:52.716677    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.716677    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:52.720251    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:52.750215    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.750215    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:52.753203    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:52.783410    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.783410    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:52.786745    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:52.816028    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.816028    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:52.819028    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:52.847808    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.847808    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:52.851676    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:52.880388    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.880388    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:52.880388    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:52.880388    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:52.927060    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:52.927060    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:52.980540    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:52.980540    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:53.040013    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:53.040013    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:53.068682    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:53.068682    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:53.153542    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:53.142301   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.143114   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.146316   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.148645   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.149541   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:53.142301   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.143114   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.146316   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.148645   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.149541   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:55.659454    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:55.682885    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:55.711696    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.711696    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:55.718399    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:55.746229    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.746229    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:55.750441    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:55.780178    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.780210    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:55.784012    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:55.811985    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.811985    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:55.816792    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:55.847996    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.847996    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:55.851745    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:55.883521    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.883521    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:55.886915    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:55.914853    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.914853    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:55.914853    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:55.914853    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:55.960920    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:55.960920    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:56.026011    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:56.026011    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:56.053113    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:56.053113    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:56.136578    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:56.126520   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.127364   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.130322   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.131490   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.132838   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:56.126520   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.127364   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.130322   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.131490   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.132838   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:56.136578    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:56.136578    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:58.683199    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:58.705404    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:58.735584    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.735584    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:58.739795    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:58.770569    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.770569    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:58.774526    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:58.804440    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.804440    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:58.808498    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:58.836009    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.836009    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:58.840208    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:58.869192    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.869192    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:58.872945    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:58.902237    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.902237    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:58.905993    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:58.933450    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.933617    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:58.933617    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:58.933617    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:58.976315    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:58.976391    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:59.038199    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:59.038199    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:59.068976    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:59.068976    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:59.160516    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:59.150031   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.151396   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.152924   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.154293   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.156675   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:59.150031   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.151396   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.152924   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.154293   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.156675   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:59.160516    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:59.160516    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:01.709859    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:01.733860    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:01.762957    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.762957    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:01.766889    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:01.793351    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.793351    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:01.797156    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:01.823801    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.823801    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:01.827545    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:01.858811    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.858811    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:01.862667    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:01.888526    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.888601    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:01.892330    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:01.921800    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.921834    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:01.925710    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:01.954630    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.954630    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:01.954630    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:01.954630    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:02.019929    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:02.019929    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:02.050304    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:02.050304    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:02.137016    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:02.125446   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.126668   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.127557   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.130278   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.131874   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:02.125446   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.126668   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.127557   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.130278   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.131874   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:02.137016    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:02.137016    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:02.181380    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:02.181380    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:04.738393    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:04.761261    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:04.788560    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.788594    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:04.792550    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:04.822339    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.822339    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:04.826135    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:04.854461    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.854531    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:04.858147    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:04.886243    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.886243    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:04.890144    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:04.918123    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.918123    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:04.922152    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:04.949493    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.949557    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:04.953111    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:04.980390    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.980390    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:04.980390    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:04.980390    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:05.043888    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:05.043888    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:05.075474    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:05.075474    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:05.156773    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:05.145508   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.147081   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.148087   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.150297   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.151035   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:05.145508   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.147081   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.148087   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.150297   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.151035   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:05.156773    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:05.156773    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:05.198847    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:05.198847    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:07.752600    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:07.774442    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:07.801273    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.801315    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:07.804806    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:07.833315    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.833315    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:07.837119    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:07.866393    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.866417    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:07.869980    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:07.898480    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.898480    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:07.902426    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:07.929231    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.929231    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:07.932443    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:07.962786    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.962786    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:07.966343    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:07.993681    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.993681    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:07.993681    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:07.993681    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:08.075996    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:08.065379   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.066297   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.068979   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.070479   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.071802   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:08.065379   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.066297   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.068979   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.070479   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.071802   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:08.075996    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:08.075996    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:08.115751    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:08.115751    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:08.167959    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:08.167959    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:08.229990    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:08.229990    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:10.765802    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:10.787970    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:10.817520    1528 logs.go:282] 0 containers: []
	W1212 20:09:10.817520    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:10.821188    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:10.850905    1528 logs.go:282] 0 containers: []
	W1212 20:09:10.850905    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:10.854741    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:10.882098    1528 logs.go:282] 0 containers: []
	W1212 20:09:10.882098    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:10.885759    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:10.915908    1528 logs.go:282] 0 containers: []
	W1212 20:09:10.915931    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:10.919484    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:10.947704    1528 logs.go:282] 0 containers: []
	W1212 20:09:10.947704    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:10.951840    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:10.979998    1528 logs.go:282] 0 containers: []
	W1212 20:09:10.979998    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:10.983440    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:11.012620    1528 logs.go:282] 0 containers: []
	W1212 20:09:11.012620    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:11.012620    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:11.012620    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:11.075910    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:11.075910    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:11.105013    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:11.105013    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:11.184242    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:11.174258   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.175183   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.177449   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.178930   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.180215   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:11.174258   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.175183   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.177449   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.178930   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.180215   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:11.184242    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:11.184242    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:11.228072    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:11.228072    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:13.782352    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:13.806071    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:13.835380    1528 logs.go:282] 0 containers: []
	W1212 20:09:13.835380    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:13.839913    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:13.866644    1528 logs.go:282] 0 containers: []
	W1212 20:09:13.866644    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:13.870648    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:13.900617    1528 logs.go:282] 0 containers: []
	W1212 20:09:13.900687    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:13.904431    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:13.928026    1528 logs.go:282] 0 containers: []
	W1212 20:09:13.928026    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:13.931830    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:13.961813    1528 logs.go:282] 0 containers: []
	W1212 20:09:13.961813    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:13.965790    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:13.993658    1528 logs.go:282] 0 containers: []
	W1212 20:09:13.993658    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:13.997303    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:14.025708    1528 logs.go:282] 0 containers: []
	W1212 20:09:14.025708    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:14.025708    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:14.025708    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:14.106478    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:14.097472   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.098766   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.100198   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.101259   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.102326   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:14.097472   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.098766   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.100198   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.101259   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.102326   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:14.106478    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:14.106478    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:14.148128    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:14.148128    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:14.203808    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:14.203885    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:14.267083    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:14.267083    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:16.803844    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:16.828076    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:16.857370    1528 logs.go:282] 0 containers: []
	W1212 20:09:16.857370    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:16.861602    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:16.888928    1528 logs.go:282] 0 containers: []
	W1212 20:09:16.888928    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:16.892594    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:16.918950    1528 logs.go:282] 0 containers: []
	W1212 20:09:16.918950    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:16.922184    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:16.949697    1528 logs.go:282] 0 containers: []
	W1212 20:09:16.949697    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:16.953615    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:16.980582    1528 logs.go:282] 0 containers: []
	W1212 20:09:16.980582    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:16.984239    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:17.011537    1528 logs.go:282] 0 containers: []
	W1212 20:09:17.011537    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:17.015236    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:17.044025    1528 logs.go:282] 0 containers: []
	W1212 20:09:17.044025    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:17.044059    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:17.044059    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:17.108593    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:17.108593    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:17.140984    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:17.140984    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:17.223600    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:17.212237   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.213454   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.216019   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.218047   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.219305   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:17.212237   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.213454   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.216019   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.218047   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.219305   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:17.223647    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:17.223647    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:17.265808    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:17.265808    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:19.827665    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:19.848754    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:19.880440    1528 logs.go:282] 0 containers: []
	W1212 20:09:19.880440    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:19.884631    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:19.911688    1528 logs.go:282] 0 containers: []
	W1212 20:09:19.911688    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:19.915503    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:19.942894    1528 logs.go:282] 0 containers: []
	W1212 20:09:19.942894    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:19.946623    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:19.974622    1528 logs.go:282] 0 containers: []
	W1212 20:09:19.974622    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:19.978983    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:20.005201    1528 logs.go:282] 0 containers: []
	W1212 20:09:20.005201    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:20.009244    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:20.040298    1528 logs.go:282] 0 containers: []
	W1212 20:09:20.040298    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:20.043935    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:20.073267    1528 logs.go:282] 0 containers: []
	W1212 20:09:20.073267    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:20.073267    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:20.073267    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:20.139351    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:20.139351    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:20.170692    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:20.170692    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:20.255758    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:20.244828   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.245691   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.248701   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.249629   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.251916   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:20.244828   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.245691   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.248701   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.249629   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.251916   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:20.255758    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:20.255758    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:20.296082    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:20.296082    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:22.852656    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:22.877113    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:22.907531    1528 logs.go:282] 0 containers: []
	W1212 20:09:22.907601    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:22.911006    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:22.938103    1528 logs.go:282] 0 containers: []
	W1212 20:09:22.938103    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:22.941741    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:22.969757    1528 logs.go:282] 0 containers: []
	W1212 20:09:22.969757    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:22.973641    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:23.003718    1528 logs.go:282] 0 containers: []
	W1212 20:09:23.003718    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:23.007427    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:23.034105    1528 logs.go:282] 0 containers: []
	W1212 20:09:23.034105    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:23.038551    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:23.068440    1528 logs.go:282] 0 containers: []
	W1212 20:09:23.068440    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:23.072250    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:23.099797    1528 logs.go:282] 0 containers: []
	W1212 20:09:23.099797    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:23.099797    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:23.099797    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:23.127441    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:23.127441    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:23.213420    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:23.200013   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.205001   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.207092   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.207990   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.210181   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:23.200013   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.205001   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.207092   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.207990   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.210181   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:23.213420    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:23.213420    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:23.258155    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:23.258155    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:23.304413    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:23.304413    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:25.871188    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:25.894216    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:25.924994    1528 logs.go:282] 0 containers: []
	W1212 20:09:25.924994    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:25.928893    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:25.956143    1528 logs.go:282] 0 containers: []
	W1212 20:09:25.956143    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:25.961174    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:25.988898    1528 logs.go:282] 0 containers: []
	W1212 20:09:25.988898    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:25.993364    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:26.021169    1528 logs.go:282] 0 containers: []
	W1212 20:09:26.021233    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:26.024829    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:26.051922    1528 logs.go:282] 0 containers: []
	W1212 20:09:26.051922    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:26.055062    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:26.082542    1528 logs.go:282] 0 containers: []
	W1212 20:09:26.082542    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:26.086788    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:26.117355    1528 logs.go:282] 0 containers: []
	W1212 20:09:26.117355    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:26.117355    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:26.117355    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:26.180352    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:26.180352    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:26.211105    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:26.211105    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:26.296971    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:26.286964   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.287943   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.291491   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.292547   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.294617   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:26.286964   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.287943   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.291491   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.292547   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.294617   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:26.296971    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:26.296971    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:26.338711    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:26.338711    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:28.896860    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:28.920643    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:28.950389    1528 logs.go:282] 0 containers: []
	W1212 20:09:28.950389    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:28.955391    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:28.982117    1528 logs.go:282] 0 containers: []
	W1212 20:09:28.982117    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:28.986142    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:29.015662    1528 logs.go:282] 0 containers: []
	W1212 20:09:29.015662    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:29.019455    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:29.049660    1528 logs.go:282] 0 containers: []
	W1212 20:09:29.049660    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:29.053631    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:29.081889    1528 logs.go:282] 0 containers: []
	W1212 20:09:29.081889    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:29.086411    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:29.114138    1528 logs.go:282] 0 containers: []
	W1212 20:09:29.114138    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:29.119659    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:29.150078    1528 logs.go:282] 0 containers: []
	W1212 20:09:29.150078    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:29.150078    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:29.150078    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:29.214085    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:29.214085    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:29.248111    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:29.248111    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:29.331531    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:29.323301   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.324246   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.326232   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.327289   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.328469   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:29.323301   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.324246   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.326232   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.327289   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.328469   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:29.331531    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:29.331573    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:29.371475    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:29.371475    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:31.925581    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:31.948416    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:31.979393    1528 logs.go:282] 0 containers: []
	W1212 20:09:31.979436    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:31.982941    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:32.012671    1528 logs.go:282] 0 containers: []
	W1212 20:09:32.012745    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:32.016490    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:32.044571    1528 logs.go:282] 0 containers: []
	W1212 20:09:32.044571    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:32.049959    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:32.077737    1528 logs.go:282] 0 containers: []
	W1212 20:09:32.077737    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:32.082023    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:32.112680    1528 logs.go:282] 0 containers: []
	W1212 20:09:32.112680    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:32.116732    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:32.144079    1528 logs.go:282] 0 containers: []
	W1212 20:09:32.144079    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:32.147365    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:32.175674    1528 logs.go:282] 0 containers: []
	W1212 20:09:32.175674    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:32.175674    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:32.175674    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:32.238433    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:32.238433    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:32.268680    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:32.268680    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:32.350924    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:32.339559   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.340729   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.342082   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.343375   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.344861   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:32.339559   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.340729   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.342082   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.343375   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.344861   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:32.351446    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:32.351446    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:32.393409    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:32.393409    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:34.949675    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:34.974371    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:35.003673    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.003673    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:35.007894    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:35.036794    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.036794    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:35.040718    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:35.068827    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.068827    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:35.073552    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:35.101505    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.101505    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:35.105374    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:35.132637    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.132637    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:35.135977    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:35.164108    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.164108    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:35.168327    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:35.196237    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.196237    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:35.196237    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:35.196237    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:35.225096    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:35.225096    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:35.310720    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:35.296566   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.297390   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.300154   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.302418   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.302966   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:35.296566   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.297390   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.300154   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.302418   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.302966   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:35.310720    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:35.310720    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:35.352640    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:35.352640    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:35.405163    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:35.405684    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:37.970126    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:37.993740    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:38.021567    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.021567    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:38.025733    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:38.054259    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.054259    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:38.058230    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:38.091609    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.091609    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:38.094726    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:38.121402    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.121402    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:38.124780    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:38.156230    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.156230    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:38.159968    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:38.187111    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.187111    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:38.191000    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:38.219114    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.219114    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:38.219114    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:38.219163    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:38.267592    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:38.267642    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:38.332291    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:38.332291    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:38.362654    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:38.362654    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:38.450249    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:38.438367   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.439369   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.441361   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.442677   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.443575   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:38.438367   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.439369   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.441361   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.442677   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.443575   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:38.450249    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:38.450249    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:41.000122    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:41.025061    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:41.056453    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.056453    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:41.060356    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:41.090046    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.090046    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:41.096769    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:41.124375    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.124375    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:41.128276    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:41.155835    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.155835    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:41.159800    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:41.188748    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.188748    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:41.193110    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:41.220152    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.220152    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:41.224010    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:41.252532    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.252532    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:41.252532    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:41.252532    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:41.316983    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:41.316983    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:41.347558    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:41.347558    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:41.428225    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:41.416671   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.417861   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.419148   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.420206   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.420911   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:41.416671   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.417861   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.419148   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.420206   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.420911   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:41.428225    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:41.428225    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:41.470919    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:41.470919    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:44.030446    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:44.055047    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:44.084459    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.084459    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:44.088206    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:44.117052    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.117052    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:44.120537    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:44.147556    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.147556    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:44.152098    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:44.180075    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.180075    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:44.183790    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:44.210767    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.210767    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:44.214367    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:44.240217    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.240217    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:44.244696    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:44.273318    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.273318    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:44.273318    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:44.273371    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:44.339517    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:44.339517    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:44.369771    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:44.369771    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:44.450064    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:44.438924   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.439839   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.441693   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.442539   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.445500   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:44.438924   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.439839   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.441693   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.442539   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.445500   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:44.450064    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:44.450064    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:44.493504    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:44.493504    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:47.062950    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:47.087994    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:47.118381    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.118409    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:47.121556    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:47.150429    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.150429    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:47.154790    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:47.182604    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.182604    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:47.186262    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:47.213354    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.213354    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:47.217174    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:47.246442    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.246442    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:47.251292    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:47.280336    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.280336    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:47.283865    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:47.311245    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.311323    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:47.311323    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:47.311323    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:47.374063    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:47.374063    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:47.404257    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:47.404257    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:47.493784    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:47.483027   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.484386   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.488247   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.489212   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.490430   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:47.483027   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.484386   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.488247   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.489212   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.490430   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:47.493784    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:47.493784    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:47.546267    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:47.546267    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:50.104321    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:50.126581    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:50.155564    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.155564    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:50.160428    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:50.189268    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.189268    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:50.192916    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:50.218955    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.218955    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:50.222686    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:50.249342    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.249342    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:50.253397    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:50.283028    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.283028    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:50.286951    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:50.325979    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.325979    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:50.329622    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:50.358362    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.358362    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:50.358362    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:50.358362    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:50.422488    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:50.422488    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:50.452652    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:50.452652    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:50.550551    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:50.541153   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.542372   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.543369   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.544652   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.545772   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:50.541153   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.542372   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.543369   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.544652   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.545772   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:50.550602    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:50.550602    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:50.590552    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:50.590552    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:53.158722    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:53.182259    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:53.211903    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.211903    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:53.215402    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:53.243958    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.243958    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:53.247562    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:53.275751    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.275751    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:53.279763    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:53.306836    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.306836    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:53.310872    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:53.337813    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.337813    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:53.341633    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:53.371291    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.371291    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:53.374974    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:53.401726    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.401726    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:53.401726    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:53.401726    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:53.484480    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:53.475528   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.476720   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.477955   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.479240   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.480197   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:53.475528   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.476720   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.477955   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.479240   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.480197   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:53.484480    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:53.484480    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:53.548050    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:53.548050    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:53.599287    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:53.599439    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:53.660624    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:53.660624    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:56.196823    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:56.221135    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:56.250407    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.250407    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:56.254016    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:56.285901    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.285901    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:56.290067    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:56.318341    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.318341    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:56.321789    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:56.352739    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.352739    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:56.356470    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:56.384106    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.384106    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:56.388211    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:56.415890    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.415890    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:56.420087    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:56.447932    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.447932    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:56.447932    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:56.447932    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:56.477708    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:56.477708    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:56.588387    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:56.579332   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.580404   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.581153   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.583503   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.584370   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:56.579332   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.580404   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.581153   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.583503   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.584370   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:56.588387    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:56.588387    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:56.628140    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:56.629024    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:56.673720    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:56.673720    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:59.242052    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:59.264739    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:59.293601    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.293601    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:59.297772    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:59.324701    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.324701    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:59.328642    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:59.358373    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.358373    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:59.362425    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:59.392638    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.392638    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:59.396206    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:59.423777    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.423777    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:59.427998    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:59.455368    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.455368    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:59.460647    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:59.488029    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.488029    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:59.488029    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:59.488029    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:59.548806    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:59.548806    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:59.580620    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:59.580620    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:59.670291    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:59.659775   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.660719   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.663685   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.664723   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.665878   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:59.659775   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.660719   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.663685   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.664723   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.665878   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:59.670291    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:59.670291    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:59.715000    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:59.715000    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:02.271675    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:02.295613    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:02.328792    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.328792    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:02.332483    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:02.364136    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.364136    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:02.368415    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:02.396018    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.396018    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:02.399987    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:02.426946    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.426946    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:02.430641    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:02.457307    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.457307    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:02.461639    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:02.490776    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.490776    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:02.495011    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:02.535030    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.535030    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:02.535030    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:02.535030    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:02.598020    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:02.598020    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:02.627885    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:02.627885    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:02.704890    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:02.692184   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.693898   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.695260   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.696802   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.698053   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:02.692184   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.693898   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.695260   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.696802   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.698053   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:02.704939    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:02.704939    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:02.743781    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:02.743781    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:05.296529    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:05.320338    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:05.350975    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.350975    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:05.354341    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:05.384954    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.384954    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:05.389226    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:05.416593    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.416663    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:05.420370    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:05.448275    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.448306    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:05.451950    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:05.489214    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.489214    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:05.492826    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:05.542815    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.542815    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:05.546994    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:05.577967    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.577967    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:05.577967    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:05.577967    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:05.666752    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:05.655586   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.656570   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.657861   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.659073   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.660173   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:05.655586   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.656570   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.657861   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.659073   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.660173   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:05.666752    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:05.666752    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:05.710699    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:05.710699    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:05.761552    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:05.761552    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:05.824698    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:05.824698    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:08.358868    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:08.384185    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:08.414077    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.414077    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:08.417802    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:08.449585    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.449585    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:08.453707    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:08.481690    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.481690    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:08.485802    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:08.526849    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.526849    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:08.530588    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:08.561211    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.561211    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:08.565127    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:08.592694    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.592781    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:08.596577    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:08.625262    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.625262    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:08.625262    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:08.625335    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:08.685169    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:08.685169    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:08.715897    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:08.715897    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:08.803701    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:08.791784   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.793050   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.794221   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.795525   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.797359   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:08.791784   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.793050   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.794221   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.795525   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.797359   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:08.803701    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:08.803701    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:08.843054    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:08.843054    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:11.399600    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:11.423207    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:11.452824    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.452824    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:11.456632    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:11.485718    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.485718    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:11.489975    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:11.516373    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.516442    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:11.520086    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:11.550008    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.550008    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:11.553479    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:11.582422    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.582422    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:11.586067    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:11.614204    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.614204    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:11.617891    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:11.647117    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.647117    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:11.647117    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:11.647117    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:11.708885    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:11.708885    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:11.738490    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:11.738490    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:11.827046    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:11.816517   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.817635   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.818643   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.819970   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.821231   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:11.816517   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.817635   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.818643   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.819970   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.821231   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:11.827046    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:11.827107    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:11.866493    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:11.866493    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:14.418219    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:14.441326    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:14.471617    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.471617    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:14.475764    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:14.525977    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.525977    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:14.530095    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:14.559065    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.559065    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:14.562300    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:14.591222    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.591222    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:14.595004    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:14.623409    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.623409    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:14.626892    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:14.654709    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.654709    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:14.658517    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:14.685033    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.685033    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:14.685033    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:14.685033    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:14.729797    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:14.729797    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:14.775571    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:14.775571    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:14.837326    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:14.837326    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:14.868773    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:14.868773    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:14.947701    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:14.936523   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.939018   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.940394   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.941385   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.944155   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:14.936523   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.939018   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.940394   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.941385   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.944155   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:17.453450    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:17.476221    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:17.508293    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.508388    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:17.512181    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:17.543844    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.543844    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:17.547662    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:17.575201    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.575201    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:17.578822    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:17.606210    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.606210    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:17.609909    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:17.635671    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.635671    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:17.639317    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:17.668567    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.668567    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:17.671701    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:17.698754    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.698754    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:17.698754    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:17.698835    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:17.746368    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:17.746368    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:17.807375    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:17.807375    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:17.838385    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:17.838385    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:17.926603    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:17.913747   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.914551   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.916660   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.920134   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.920887   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:17.913747   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.914551   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.916660   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.920134   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.920887   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:17.926603    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:17.926648    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:20.475641    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:20.498334    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:20.527197    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.527197    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:20.530922    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:20.557934    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.557934    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:20.561696    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:20.589458    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.589458    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:20.593618    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:20.618953    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.619013    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:20.622779    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:20.650087    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.650087    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:20.653349    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:20.680898    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.680898    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:20.684841    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:20.711841    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.711841    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:20.711841    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:20.711841    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:20.773325    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:20.773325    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:20.802932    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:20.802932    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:20.882468    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:20.873604   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.874733   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.875947   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.877488   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.878762   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:20.873604   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.874733   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.875947   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.877488   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.878762   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:20.882468    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:20.882468    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:20.924918    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:20.924918    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:23.483925    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:23.503925    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:23.531502    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.531502    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:23.535209    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:23.566493    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.566493    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:23.569915    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:23.598869    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.598869    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:23.603128    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:23.629658    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.629658    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:23.633104    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:23.659718    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.659718    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:23.663327    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:23.693156    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.693156    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:23.696530    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:23.727025    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.727025    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:23.727025    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:23.727025    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:23.788970    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:23.788970    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:23.819732    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:23.819732    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:23.903797    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:23.893226   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.894440   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.895734   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.897141   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.899253   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:23.893226   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.894440   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.895734   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.897141   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.899253   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:23.903797    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:23.903797    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:23.943716    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:23.943716    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:26.496986    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:26.519387    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:26.546439    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.546439    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:26.550311    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:26.579658    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.579658    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:26.583767    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:26.611690    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.611690    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:26.616096    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:26.642773    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.642773    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:26.646291    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:26.674086    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.674086    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:26.677423    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:26.705896    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.705896    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:26.709747    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:26.736563    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.736563    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:26.736563    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:26.736563    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:26.797921    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:26.797921    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:26.827915    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:26.827915    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:26.912180    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:26.902978   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.904059   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.905126   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.906124   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.907093   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:26.902978   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.904059   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.905126   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.906124   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.907093   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:26.912180    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:26.912180    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:26.952784    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:26.952784    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:29.506291    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:29.528153    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:29.558126    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.558126    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:29.562358    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:29.592320    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.592320    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:29.596049    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:29.628556    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.628556    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:29.632809    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:29.657311    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.657311    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:29.661781    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:29.690232    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.690261    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:29.693735    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:29.722288    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.722288    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:29.725599    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:29.757022    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.757022    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:29.757057    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:29.757057    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:29.838684    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:29.829268   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.831893   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.834799   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.835771   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.837035   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:29.829268   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.831893   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.834799   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.835771   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.837035   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:29.838684    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:29.840075    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:29.881968    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:29.881968    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:29.937264    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:29.937264    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:30.003954    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:30.003954    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:32.543156    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:32.567379    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:32.595089    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.595089    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:32.599147    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:32.627893    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.627962    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:32.631484    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:32.658969    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.658969    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:32.662719    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:32.689837    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.689837    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:32.693526    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:32.719931    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.719931    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:32.723427    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:32.754044    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.754044    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:32.757365    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:32.785242    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.785242    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:32.785242    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:32.785242    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:32.866344    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:32.854769   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.856146   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.858548   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.859713   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.861545   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:32.854769   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.856146   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.858548   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.859713   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.861545   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:32.866344    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:32.866344    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:32.910000    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:32.910000    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:32.959713    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:32.959713    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:33.023739    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:33.023739    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:35.563488    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:35.587848    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:35.619497    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.619497    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:35.625107    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:35.653936    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.653936    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:35.657619    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:35.684524    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.684524    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:35.687685    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:35.718759    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.718759    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:35.722575    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:35.749655    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.749655    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:35.753297    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:35.780974    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.780974    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:35.784685    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:35.810182    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.810182    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:35.810182    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:35.810182    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:35.892605    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:35.881515   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.884461   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.886394   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.887420   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.888495   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:35.881515   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.884461   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.886394   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.887420   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.888495   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:35.892605    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:35.892605    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:35.932890    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:35.932890    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:35.985679    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:35.985679    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:36.046361    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:36.046361    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:38.583800    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:38.606814    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:38.638211    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.638211    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:38.642266    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:38.669848    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.669848    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:38.673886    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:38.700984    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.700984    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:38.705078    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:38.729910    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.729910    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:38.733986    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:38.760705    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.760705    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:38.765121    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:38.799915    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.799915    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:38.804009    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:38.833364    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.833364    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:38.833364    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:38.833364    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:38.913728    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:38.904474   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.905728   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.906533   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.908851   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.910188   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:38.904474   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.905728   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.906533   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.908851   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.910188   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:38.914694    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:38.914694    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:38.953812    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:38.953812    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:38.999712    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:38.999712    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:39.060789    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:39.060789    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:41.597593    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:41.620430    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:41.650082    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.650082    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:41.653991    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:41.681237    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.681306    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:41.684963    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:41.713795    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.713795    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:41.719712    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:41.749037    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.749037    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:41.753070    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:41.779427    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.779427    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:41.783501    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:41.815751    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.815751    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:41.819560    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:41.847881    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.847881    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:41.847881    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:41.847931    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:41.927320    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:41.917717   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.918601   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.921188   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.923369   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.924481   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:41.917717   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.918601   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.921188   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.923369   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.924481   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:41.927320    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:41.927320    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:41.970940    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:41.970940    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:42.027555    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:42.027555    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:42.089451    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:42.089451    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:44.625751    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:44.648990    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:44.676551    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.676585    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:44.679722    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:44.709172    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.709172    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:44.713304    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:44.743046    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.743046    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:44.748526    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:44.778521    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.778521    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:44.782734    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:44.814603    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.814603    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:44.817683    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:44.845948    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.845948    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:44.849265    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:44.879812    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.879812    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:44.879812    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:44.879812    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:44.944127    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:44.944127    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:44.974113    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:44.974113    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:45.057102    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:45.045651   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.047975   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.050361   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.051485   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.052325   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:45.045651   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.047975   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.050361   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.051485   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.052325   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:45.057102    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:45.057102    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:45.100139    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:45.100139    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:47.652183    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:47.675849    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:47.706239    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.706239    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:47.709475    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:47.741233    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.741233    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:47.744861    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:47.774055    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.774055    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:47.777505    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:47.805794    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.805794    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:47.808964    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:47.836392    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.836392    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:47.841779    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:47.870715    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.870715    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:47.874288    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:47.901831    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.901831    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:47.901831    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:47.901831    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:47.944346    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:47.944346    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:47.988778    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:47.988778    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:48.052537    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:48.052537    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:48.083339    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:48.083339    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:48.169498    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:48.160314   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.161274   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.162381   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.163778   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.164689   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:48.160314   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.161274   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.162381   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.163778   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.164689   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:50.675888    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:50.695141    1528 kubeadm.go:602] duration metric: took 4m2.9691176s to restartPrimaryControlPlane
	W1212 20:10:50.695255    1528 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 20:10:50.699541    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1212 20:10:51.173784    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:10:51.196593    1528 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:10:51.210961    1528 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:10:51.215040    1528 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:10:51.228862    1528 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:10:51.228862    1528 kubeadm.go:158] found existing configuration files:
	
	I1212 20:10:51.232787    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:10:51.246730    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:10:51.251357    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:10:51.268580    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:10:51.283713    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:10:51.288367    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:10:51.308779    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:10:51.322868    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:10:51.327510    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:10:51.347243    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:10:51.360015    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:10:51.365274    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:10:51.383196    1528 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:10:51.503494    1528 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1212 20:10:51.590365    1528 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 20:10:51.685851    1528 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:14:52.890657    1528 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 20:14:52.890657    1528 kubeadm.go:319] 
	I1212 20:14:52.891189    1528 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 20:14:52.897133    1528 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 20:14:52.897133    1528 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:14:52.897133    1528 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:14:52.897133    1528 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_INET: enabled
	I1212 20:14:52.898464    1528 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 20:14:52.898582    1528 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 20:14:52.898779    1528 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 20:14:52.898920    1528 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 20:14:52.899045    1528 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 20:14:52.899131    1528 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 20:14:52.899262    1528 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 20:14:52.899432    1528 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 20:14:52.899517    1528 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 20:14:52.899644    1528 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 20:14:52.899729    1528 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 20:14:52.899847    1528 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 20:14:52.900038    1528 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 20:14:52.900217    1528 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 20:14:52.900390    1528 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 20:14:52.900502    1528 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 20:14:52.900574    1528 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 20:14:52.900710    1528 kubeadm.go:319] OS: Linux
	I1212 20:14:52.900833    1528 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:14:52.900915    1528 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 20:14:52.901708    1528 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:14:52.901818    1528 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:14:52.901818    1528 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:14:52.901818    1528 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:14:52.906810    1528 out.go:252]   - Generating certificates and keys ...
	I1212 20:14:52.906810    1528 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:14:52.906810    1528 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:14:52.906810    1528 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 20:14:52.907808    1528 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:14:52.907808    1528 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:14:52.907808    1528 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:14:52.907808    1528 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:14:52.908849    1528 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:14:52.908909    1528 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:14:52.908909    1528 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:14:52.908909    1528 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:14:52.912070    1528 out.go:252]   - Booting up control plane ...
	I1212 20:14:52.912070    1528 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:14:52.912070    1528 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:14:52.912070    1528 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:14:52.912070    1528 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:14:52.913075    1528 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:14:52.913075    1528 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:14:52.913075    1528 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:14:52.913075    1528 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:14:52.914083    1528 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:14:52.914083    1528 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:14:52.914083    1528 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000441542s
	I1212 20:14:52.914083    1528 kubeadm.go:319] 
	I1212 20:14:52.914083    1528 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 20:14:52.914083    1528 kubeadm.go:319] 	- The kubelet is not running
	I1212 20:14:52.914083    1528 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 20:14:52.914083    1528 kubeadm.go:319] 
	I1212 20:14:52.914083    1528 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 20:14:52.915069    1528 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 20:14:52.915069    1528 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 20:14:52.915069    1528 kubeadm.go:319] 
	W1212 20:14:52.915069    1528 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000441542s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 20:14:52.921774    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1212 20:14:53.390305    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:14:53.408818    1528 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:14:53.413243    1528 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:14:53.425325    1528 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:14:53.425325    1528 kubeadm.go:158] found existing configuration files:
	
	I1212 20:14:53.430625    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:14:53.442895    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:14:53.446965    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:14:53.464658    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:14:53.478038    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:14:53.482805    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:14:53.499083    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:14:53.513919    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:14:53.518566    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:14:53.538555    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:14:53.552479    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:14:53.557205    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:14:53.576642    1528 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:14:53.698383    1528 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1212 20:14:53.775189    1528 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 20:14:53.868267    1528 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:18:54.359522    1528 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 20:18:54.359522    1528 kubeadm.go:319] 
	I1212 20:18:54.359522    1528 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 20:18:54.362954    1528 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 20:18:54.363173    1528 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:18:54.363383    1528 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:18:54.363609    1528 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 20:18:54.363609    1528 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 20:18:54.363609    1528 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 20:18:54.363609    1528 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 20:18:54.363609    1528 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 20:18:54.363609    1528 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 20:18:54.364132    1528 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 20:18:54.364423    1528 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 20:18:54.364423    1528 kubeadm.go:319] CONFIG_INET: enabled
	I1212 20:18:54.364423    1528 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 20:18:54.364423    1528 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 20:18:54.364423    1528 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 20:18:54.364423    1528 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 20:18:54.364950    1528 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 20:18:54.365131    1528 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 20:18:54.365131    1528 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 20:18:54.365131    1528 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 20:18:54.365131    1528 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 20:18:54.365131    1528 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 20:18:54.365131    1528 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 20:18:54.365662    1528 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 20:18:54.365743    1528 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 20:18:54.365828    1528 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 20:18:54.365917    1528 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 20:18:54.366005    1528 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 20:18:54.366087    1528 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 20:18:54.366168    1528 kubeadm.go:319] OS: Linux
	I1212 20:18:54.366224    1528 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:18:54.366255    1528 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 20:18:54.366823    1528 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:18:54.366960    1528 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:18:54.367127    1528 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:18:54.367127    1528 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:18:54.369422    1528 out.go:252]   - Generating certificates and keys ...
	I1212 20:18:54.369422    1528 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:18:54.369422    1528 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:18:54.369953    1528 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 20:18:54.370159    1528 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 20:18:54.370228    1528 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 20:18:54.370309    1528 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 20:18:54.370471    1528 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 20:18:54.370639    1528 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 20:18:54.370717    1528 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 20:18:54.370717    1528 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 20:18:54.370717    1528 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 20:18:54.370717    1528 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:18:54.370717    1528 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:18:54.371251    1528 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:18:54.371313    1528 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:18:54.371344    1528 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:18:54.371344    1528 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:18:54.371344    1528 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:18:54.371344    1528 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:18:54.374291    1528 out.go:252]   - Booting up control plane ...
	I1212 20:18:54.374291    1528 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:18:54.374291    1528 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:18:54.374291    1528 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:18:54.374291    1528 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:18:54.375259    1528 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:18:54.375259    1528 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:18:54.375259    1528 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:18:54.375259    1528 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:18:54.375259    1528 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:18:54.375259    1528 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:18:54.375259    1528 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000961807s
	I1212 20:18:54.375259    1528 kubeadm.go:319] 
	I1212 20:18:54.376246    1528 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 20:18:54.376246    1528 kubeadm.go:319] 	- The kubelet is not running
	I1212 20:18:54.376405    1528 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 20:18:54.376405    1528 kubeadm.go:319] 
	I1212 20:18:54.376405    1528 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 20:18:54.376405    1528 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 20:18:54.376405    1528 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 20:18:54.376405    1528 kubeadm.go:319] 
	I1212 20:18:54.376405    1528 kubeadm.go:403] duration metric: took 12m6.6943451s to StartCluster
	I1212 20:18:54.376405    1528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:18:54.380250    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:18:54.441453    1528 cri.go:89] found id: ""
	I1212 20:18:54.441453    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.441453    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:18:54.441453    1528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:18:54.446414    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:18:54.508794    1528 cri.go:89] found id: ""
	I1212 20:18:54.508794    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.508794    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:18:54.508794    1528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:18:54.513698    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:18:54.553213    1528 cri.go:89] found id: ""
	I1212 20:18:54.553257    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.553257    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:18:54.553295    1528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:18:54.558235    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:18:54.603262    1528 cri.go:89] found id: ""
	I1212 20:18:54.603262    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.603262    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:18:54.603262    1528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:18:54.608185    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:18:54.648151    1528 cri.go:89] found id: ""
	I1212 20:18:54.648151    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.648151    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:18:54.648151    1528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:18:54.652647    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:18:54.693419    1528 cri.go:89] found id: ""
	I1212 20:18:54.693419    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.693419    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:18:54.693419    1528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:18:54.697661    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:18:54.737800    1528 cri.go:89] found id: ""
	I1212 20:18:54.737800    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.737800    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:18:54.737858    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:18:54.737858    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:18:54.790460    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:18:54.790460    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:18:54.852887    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:18:54.852887    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:18:54.883744    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:18:54.883744    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:18:54.965870    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:18:54.956009   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.957362   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.959496   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.962003   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.963316   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:18:54.956009   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.957362   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.959496   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.962003   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.963316   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:18:54.965870    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:18:54.965870    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1212 20:18:55.009075    1528 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000961807s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 20:18:55.009075    1528 out.go:285] * 
	W1212 20:18:55.009075    1528 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000961807s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 20:18:55.009075    1528 out.go:285] * 
	W1212 20:18:55.011173    1528 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:18:55.016858    1528 out.go:203] 
	W1212 20:18:55.021226    1528 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000961807s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 20:18:55.021226    1528 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 20:18:55.021226    1528 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 20:18:55.024694    1528 out.go:203] 
	
	
	==> Docker <==
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.259960912Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.259968912Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.260013916Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.260053720Z" level=info msg="Initializing buildkit"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.356181012Z" level=info msg="Completed buildkit initialization"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.364821976Z" level=info msg="Daemon has completed initialization"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.364991591Z" level=info msg="API listen on /run/docker.sock"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.365009292Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.365041495Z" level=info msg="API listen on [::]:2376"
	Dec 12 20:06:44 functional-468800 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 12 20:06:44 functional-468800 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 20:06:44 functional-468800 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 12 20:06:44 functional-468800 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 12 20:06:45 functional-468800 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Start docker client with request timeout 0s"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Loaded network plugin cni"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 12 20:06:45 functional-468800 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:19:50.797625   41803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:19:50.798516   41803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:19:50.802092   41803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:19:50.803426   41803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:19:50.804292   41803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000759] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000963] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000962] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001270] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000859] FS:  0000000000000000 GS:  0000000000000000
	[Dec12 20:06] CPU: 0 PID: 65148 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000914] RIP: 0033:0x7f777423db20
	[  +0.000416] Code: Unable to access opcode bytes at RIP 0x7f777423daf6.
	[  +0.000639] RSP: 002b:00007ffce8bd2080 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000797] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000800] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000998] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001310] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001295] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.002739] FS:  0000000000000000 GS:  0000000000000000
	[  +0.817521] CPU: 11 PID: 65286 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000931] RIP: 0033:0x7f6f12da3b20
	[  +0.000392] Code: Unable to access opcode bytes at RIP 0x7f6f12da3af6.
	[  +0.000657] RSP: 002b:00007ffe2eabe1e0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000812] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000817] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000776] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000784] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000994] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001070] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 20:19:50 up  1:21,  0 user,  load average: 0.46, 0.35, 0.44
	Linux functional-468800 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 20:19:47 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:19:48 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 392.
	Dec 12 20:19:48 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:19:48 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:19:48 functional-468800 kubelet[41650]: E1212 20:19:48.447444   41650 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:19:48 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:19:48 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:19:49 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 393.
	Dec 12 20:19:49 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:19:49 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:19:49 functional-468800 kubelet[41662]: E1212 20:19:49.206620   41662 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:19:49 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:19:49 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:19:49 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 394.
	Dec 12 20:19:49 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:19:49 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:19:49 functional-468800 kubelet[41690]: E1212 20:19:49.949403   41690 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:19:49 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:19:49 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:19:50 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 395.
	Dec 12 20:19:50 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:19:50 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:19:50 functional-468800 kubelet[41775]: E1212 20:19:50.699406   41775 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:19:50 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:19:50 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-468800 -n functional-468800
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-468800 -n functional-468800: exit status 2 (581.1687ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-468800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (53.88s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (20.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-468800 apply -f testdata\invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-468800 apply -f testdata\invalidsvc.yaml: exit status 1 (20.1961199s)

** stderr ** 
	error: error validating "testdata\\invalidsvc.yaml": error validating data: failed to download openapi: Get "https://127.0.0.1:55778/openapi/v2?timeout=32s": EOF; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-468800 apply -f testdata\invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (20.20s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (5.34s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-468800 status: exit status 2 (597.9674ms)

-- stdout --
	functional-468800
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-windows-amd64.exe -p functional-468800 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-468800 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (587.0422ms)

-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-windows-amd64.exe -p functional-468800 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-468800 status -o json: exit status 2 (581.9144ms)

-- stdout --
	{"Name":"functional-468800","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-windows-amd64.exe -p functional-468800 status -o json" : exit status 2
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-468800
helpers_test.go:244: (dbg) docker inspect functional-468800:

-- stdout --
	[
	    {
	        "Id": "0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356",
	        "Created": "2025-12-12T19:49:05.216637357Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42493,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T19:49:05.485304524Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/hosts",
	        "LogPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356-json.log",
	        "Name": "/functional-468800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-468800:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-468800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06-init/diff:/var/lib/docker/overlay2/98a48e148b465d6f3cfef773b7defe35ccd4e6e0eb3c49d8b329f27b5b52fa09/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-468800",
	                "Source": "/var/lib/docker/volumes/functional-468800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-468800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-468800",
	                "name.minikube.sigs.k8s.io": "functional-468800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1c576a1bc459e3ecef05356a56ff79fb60b3540cf362d7af9e4f9ccd4a4dded4",
	            "SandboxKey": "/var/run/docker/netns/1c576a1bc459",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55779"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55780"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55781"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55777"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55778"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-468800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a0db2e63885ea6d1e4cf32e8285a0f1e1fe10bae67ef9ab957c6270bdb8c136b",
	                    "EndpointID": "d5e453160824066e5fee477ceb2ca5411fd18d149268eed593084ba8a0cf13dd",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-468800",
	                        "0d3494c9d93e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-468800 -n functional-468800
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-468800 -n functional-468800: exit status 2 (580.9765ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-468800 logs -n 25: (1.2611664s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND   │                                                                                                 ARGS                                                                                                  │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service    │ functional-468800 service list                                                                                                                                                                        │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │                     │
	│ config     │ functional-468800 config unset cpus                                                                                                                                                                   │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ config     │ functional-468800 config get cpus                                                                                                                                                                     │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │                     │
	│ cp         │ functional-468800 cp functional-468800:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp1966122111\001\cp-test.txt │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh        │ functional-468800 ssh echo hello                                                                                                                                                                      │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ service    │ functional-468800 service list -o json                                                                                                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │                     │
	│ ssh        │ functional-468800 ssh -n functional-468800 sudo cat /home/docker/cp-test.txt                                                                                                                          │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ service    │ functional-468800 service --namespace=default --https --url hello-node                                                                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │                     │
	│ ssh        │ functional-468800 ssh cat /etc/hostname                                                                                                                                                               │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ service    │ functional-468800 service hello-node --url --format={{.IP}}                                                                                                                                           │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │                     │
	│ cp         │ functional-468800 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                                                             │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ tunnel     │ functional-468800 tunnel --alsologtostderr                                                                                                                                                            │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │                     │
	│ tunnel     │ functional-468800 tunnel --alsologtostderr                                                                                                                                                            │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │                     │
	│ service    │ functional-468800 service hello-node --url                                                                                                                                                            │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │                     │
	│ ssh        │ functional-468800 ssh -n functional-468800 sudo cat /tmp/does/not/exist/cp-test.txt                                                                                                                   │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ tunnel     │ functional-468800 tunnel --alsologtostderr                                                                                                                                                            │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │                     │
	│ addons     │ functional-468800 addons list                                                                                                                                                                         │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ addons     │ functional-468800 addons list -o json                                                                                                                                                                 │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh        │ functional-468800 ssh sudo cat /etc/ssl/certs/13396.pem                                                                                                                                               │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh        │ functional-468800 ssh sudo cat /usr/share/ca-certificates/13396.pem                                                                                                                                   │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh        │ functional-468800 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                                                              │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh        │ functional-468800 ssh sudo cat /etc/ssl/certs/133962.pem                                                                                                                                              │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh        │ functional-468800 ssh sudo cat /usr/share/ca-certificates/133962.pem                                                                                                                                  │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh        │ functional-468800 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                                                              │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ docker-env │ functional-468800 docker-env                                                                                                                                                                          │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	└────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:06:38
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:06:38.727985    1528 out.go:360] Setting OutFile to fd 1056 ...
	I1212 20:06:38.773098    1528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:06:38.773098    1528 out.go:374] Setting ErrFile to fd 1212...
	I1212 20:06:38.773098    1528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:06:38.787709    1528 out.go:368] Setting JSON to false
	I1212 20:06:38.790304    1528 start.go:133] hostinfo: {"hostname":"minikube4","uptime":4136,"bootTime":1765565861,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 20:06:38.790304    1528 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 20:06:38.796304    1528 out.go:179] * [functional-468800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 20:06:38.800290    1528 notify.go:221] Checking for updates...
	I1212 20:06:38.800290    1528 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 20:06:38.802303    1528 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:06:38.805306    1528 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 20:06:38.807332    1528 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:06:38.808856    1528 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:06:38.812430    1528 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 20:06:38.812430    1528 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:06:38.929707    1528 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 20:06:38.933677    1528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:06:39.195122    1528 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-12 20:06:39.177384092 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 20:06:39.201119    1528 out.go:179] * Using the docker driver based on existing profile
	I1212 20:06:39.203117    1528 start.go:309] selected driver: docker
	I1212 20:06:39.203117    1528 start.go:927] validating driver "docker" against &{Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:06:39.203117    1528 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:06:39.209122    1528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:06:39.449342    1528 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-12 20:06:39.430307853 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 20:06:39.528922    1528 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:06:39.529468    1528 cni.go:84] Creating CNI manager for ""
	I1212 20:06:39.529468    1528 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 20:06:39.529468    1528 start.go:353] cluster config:
	{Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:06:39.533005    1528 out.go:179] * Starting "functional-468800" primary control-plane node in "functional-468800" cluster
	I1212 20:06:39.535095    1528 cache.go:134] Beginning downloading kic base image for docker with docker
	I1212 20:06:39.537607    1528 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:06:39.540959    1528 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 20:06:39.540959    1528 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:06:39.540959    1528 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1212 20:06:39.540959    1528 cache.go:65] Caching tarball of preloaded images
	I1212 20:06:39.541554    1528 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 20:06:39.541554    1528 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1212 20:06:39.541554    1528 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\config.json ...
	I1212 20:06:39.619509    1528 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:06:39.619509    1528 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:06:39.619509    1528 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:06:39.619509    1528 start.go:360] acquireMachinesLock for functional-468800: {Name:mk7e7177bdfcb5a7969561474f8bb14fa15c1eab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:06:39.619509    1528 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-468800"
	I1212 20:06:39.620041    1528 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:06:39.620041    1528 fix.go:54] fixHost starting: 
	I1212 20:06:39.627157    1528 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
	I1212 20:06:39.683014    1528 fix.go:112] recreateIfNeeded on functional-468800: state=Running err=<nil>
	W1212 20:06:39.683376    1528 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 20:06:39.686124    1528 out.go:252] * Updating the running docker "functional-468800" container ...
	I1212 20:06:39.686124    1528 machine.go:94] provisionDockerMachine start ...
	I1212 20:06:39.689814    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:39.744908    1528 main.go:143] libmachine: Using SSH client type: native
	I1212 20:06:39.745476    1528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 20:06:39.745476    1528 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:06:39.930965    1528 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-468800
	
	I1212 20:06:39.931078    1528 ubuntu.go:182] provisioning hostname "functional-468800"
	I1212 20:06:39.934795    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:39.989752    1528 main.go:143] libmachine: Using SSH client type: native
	I1212 20:06:39.990452    1528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 20:06:39.990452    1528 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-468800 && echo "functional-468800" | sudo tee /etc/hostname
	I1212 20:06:40.176756    1528 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-468800
	
	I1212 20:06:40.180410    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:40.235554    1528 main.go:143] libmachine: Using SSH client type: native
	I1212 20:06:40.236742    1528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 20:06:40.236742    1528 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-468800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-468800/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-468800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:06:40.410965    1528 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:06:40.410965    1528 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1212 20:06:40.410965    1528 ubuntu.go:190] setting up certificates
	I1212 20:06:40.410965    1528 provision.go:84] configureAuth start
	I1212 20:06:40.414835    1528 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-468800
	I1212 20:06:40.468680    1528 provision.go:143] copyHostCerts
	I1212 20:06:40.468680    1528 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1212 20:06:40.468680    1528 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1212 20:06:40.468680    1528 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 20:06:40.469680    1528 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1212 20:06:40.469680    1528 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1212 20:06:40.469680    1528 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 20:06:40.470682    1528 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1212 20:06:40.470682    1528 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1212 20:06:40.470682    1528 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1212 20:06:40.471679    1528 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-468800 san=[127.0.0.1 192.168.49.2 functional-468800 localhost minikube]
	I1212 20:06:40.521679    1528 provision.go:177] copyRemoteCerts
	I1212 20:06:40.526217    1528 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:06:40.529224    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:40.578843    1528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 20:06:40.705122    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 20:06:40.732235    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 20:06:40.758034    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 20:06:40.787536    1528 provision.go:87] duration metric: took 376.5012ms to configureAuth
	I1212 20:06:40.787564    1528 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:06:40.788016    1528 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 20:06:40.791899    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:40.847433    1528 main.go:143] libmachine: Using SSH client type: native
	I1212 20:06:40.847433    1528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 20:06:40.847433    1528 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 20:06:41.031514    1528 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1212 20:06:41.031514    1528 ubuntu.go:71] root file system type: overlay
	I1212 20:06:41.031514    1528 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 20:06:41.035525    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:41.089326    1528 main.go:143] libmachine: Using SSH client type: native
	I1212 20:06:41.090065    1528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 20:06:41.090155    1528 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 20:06:41.283431    1528 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 20:06:41.287473    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:41.343081    1528 main.go:143] libmachine: Using SSH client type: native
	I1212 20:06:41.343562    1528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 20:06:41.343562    1528 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 20:06:41.525616    1528 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:06:41.525616    1528 machine.go:97] duration metric: took 1.8394714s to provisionDockerMachine
	I1212 20:06:41.525616    1528 start.go:293] postStartSetup for "functional-468800" (driver="docker")
	I1212 20:06:41.525616    1528 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:06:41.530519    1528 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:06:41.534083    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:41.586502    1528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 20:06:41.720007    1528 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:06:41.727943    1528 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:06:41.727943    1528 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:06:41.727943    1528 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1212 20:06:41.727943    1528 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1212 20:06:41.728602    1528 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> 133962.pem in /etc/ssl/certs
	I1212 20:06:41.729437    1528 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\13396\hosts -> hosts in /etc/test/nested/copy/13396
	I1212 20:06:41.733519    1528 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/13396
	I1212 20:06:41.745958    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /etc/ssl/certs/133962.pem (1708 bytes)
	I1212 20:06:41.772738    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\13396\hosts --> /etc/test/nested/copy/13396/hosts (40 bytes)
	I1212 20:06:41.802626    1528 start.go:296] duration metric: took 277.0071ms for postStartSetup
	I1212 20:06:41.807164    1528 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:06:41.809505    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:41.864695    1528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 20:06:41.985729    1528 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:06:41.994649    1528 fix.go:56] duration metric: took 2.3745808s for fixHost
	I1212 20:06:41.994649    1528 start.go:83] releasing machines lock for "functional-468800", held for 2.3751133s
	I1212 20:06:41.998707    1528 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-468800
	I1212 20:06:42.059230    1528 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1212 20:06:42.063903    1528 ssh_runner.go:195] Run: cat /version.json
	I1212 20:06:42.063903    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:42.066691    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:42.116356    1528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 20:06:42.117357    1528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	W1212 20:06:42.228585    1528 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1212 20:06:42.232646    1528 ssh_runner.go:195] Run: systemctl --version
	I1212 20:06:42.247485    1528 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:06:42.257236    1528 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:06:42.263875    1528 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:06:42.279473    1528 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:06:42.279473    1528 start.go:496] detecting cgroup driver to use...
	I1212 20:06:42.279473    1528 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 20:06:42.283549    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:06:42.307873    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 20:06:42.326439    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 20:06:42.341366    1528 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 20:06:42.345268    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1212 20:06:42.347179    1528 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1212 20:06:42.347179    1528 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1212 20:06:42.365551    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 20:06:42.385740    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 20:06:42.407021    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 20:06:42.427172    1528 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:06:42.448213    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 20:06:42.467444    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 20:06:42.487296    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 20:06:42.507050    1528 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:06:42.524437    1528 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:06:42.541928    1528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:06:42.701987    1528 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 20:06:42.867618    1528 start.go:496] detecting cgroup driver to use...
	I1212 20:06:42.867618    1528 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 20:06:42.872524    1528 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 20:06:42.900833    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:06:42.922770    1528 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:06:42.982495    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:06:43.005292    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 20:06:43.026719    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:06:43.052829    1528 ssh_runner.go:195] Run: which cri-dockerd
	I1212 20:06:43.064606    1528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 20:06:43.079549    1528 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1212 20:06:43.104999    1528 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 20:06:43.240280    1528 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 20:06:43.379193    1528 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 20:06:43.379358    1528 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 20:06:43.405761    1528 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1212 20:06:43.427392    1528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:06:43.565288    1528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 20:06:44.374705    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:06:44.396001    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1212 20:06:44.418749    1528 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1212 20:06:44.445721    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 20:06:44.466663    1528 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 20:06:44.598807    1528 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 20:06:44.740962    1528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:06:44.883493    1528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 20:06:44.907977    1528 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1212 20:06:44.931006    1528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:06:45.071046    1528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1212 20:06:45.171465    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 20:06:45.190143    1528 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 20:06:45.194535    1528 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 20:06:45.202518    1528 start.go:564] Will wait 60s for crictl version
	I1212 20:06:45.206873    1528 ssh_runner.go:195] Run: which crictl
	I1212 20:06:45.221614    1528 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:06:45.263002    1528 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1212 20:06:45.266767    1528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 20:06:45.308717    1528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 20:06:45.348580    1528 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1212 20:06:45.352493    1528 cli_runner.go:164] Run: docker exec -t functional-468800 dig +short host.docker.internal
	I1212 20:06:45.482840    1528 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1212 20:06:45.487311    1528 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1212 20:06:45.498523    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:45.552748    1528 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1212 20:06:45.554383    1528 kubeadm.go:884] updating cluster {Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:06:45.554933    1528 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 20:06:45.558499    1528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 20:06:45.589105    1528 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-468800
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1212 20:06:45.589105    1528 docker.go:621] Images already preloaded, skipping extraction
	I1212 20:06:45.592742    1528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 20:06:45.625313    1528 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-468800
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1212 20:06:45.625313    1528 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:06:45.625313    1528 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1212 20:06:45.625829    1528 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-468800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:06:45.629232    1528 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 20:06:45.698056    1528 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1212 20:06:45.698078    1528 cni.go:84] Creating CNI manager for ""
	I1212 20:06:45.698133    1528 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 20:06:45.698180    1528 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:06:45.698180    1528 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-468800 NodeName:functional-468800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConf
igOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:06:45.698180    1528 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-468800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:06:45.702170    1528 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 20:06:45.714209    1528 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:06:45.719390    1528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:06:45.731628    1528 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1212 20:06:45.753236    1528 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 20:06:45.772644    1528 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I1212 20:06:45.798125    1528 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 20:06:45.809796    1528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:06:45.998447    1528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:06:46.682417    1528 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800 for IP: 192.168.49.2
	I1212 20:06:46.682417    1528 certs.go:195] generating shared ca certs ...
	I1212 20:06:46.682417    1528 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:06:46.683216    1528 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1212 20:06:46.683331    1528 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1212 20:06:46.683331    1528 certs.go:257] generating profile certs ...
	I1212 20:06:46.683996    1528 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\client.key
	I1212 20:06:46.684112    1528 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.key.a2fee78d
	I1212 20:06:46.684112    1528 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.key
	I1212 20:06:46.685029    1528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem (1338 bytes)
	W1212 20:06:46.685029    1528 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396_empty.pem, impossibly tiny 0 bytes
	I1212 20:06:46.685029    1528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1212 20:06:46.685554    1528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 20:06:46.685624    1528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 20:06:46.685624    1528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1212 20:06:46.685624    1528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem (1708 bytes)
	I1212 20:06:46.686999    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:06:46.715172    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 20:06:46.745329    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:06:46.775248    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:06:46.804288    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 20:06:46.833541    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 20:06:46.858974    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:06:46.883320    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:06:46.912462    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:06:46.937010    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem --> /usr/share/ca-certificates/13396.pem (1338 bytes)
	I1212 20:06:46.963968    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /usr/share/ca-certificates/133962.pem (1708 bytes)
	I1212 20:06:46.987545    1528 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:06:47.014201    1528 ssh_runner.go:195] Run: openssl version
	I1212 20:06:47.028684    1528 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:06:47.047532    1528 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:06:47.066889    1528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:06:47.074545    1528 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:06:47.078818    1528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:06:47.128719    1528 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:06:47.145523    1528 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13396.pem
	I1212 20:06:47.162300    1528 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13396.pem /etc/ssl/certs/13396.pem
	I1212 20:06:47.179220    1528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13396.pem
	I1212 20:06:47.188551    1528 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 20:06:47.193732    1528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13396.pem
	I1212 20:06:47.241331    1528 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:06:47.258219    1528 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/133962.pem
	I1212 20:06:47.276085    1528 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/133962.pem /etc/ssl/certs/133962.pem
	I1212 20:06:47.293199    1528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133962.pem
	I1212 20:06:47.300084    1528 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 20:06:47.304026    1528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133962.pem
	I1212 20:06:47.352991    1528 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:06:47.371677    1528 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:06:47.384558    1528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:06:47.433291    1528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:06:47.480566    1528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:06:47.530653    1528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:06:47.582068    1528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:06:47.630287    1528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 20:06:47.673527    1528 kubeadm.go:401] StartCluster: {Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:06:47.678147    1528 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 20:06:47.710789    1528 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:06:47.723256    1528 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 20:06:47.723256    1528 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 20:06:47.727283    1528 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 20:06:47.740989    1528 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:06:47.744500    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:47.805147    1528 kubeconfig.go:125] found "functional-468800" server: "https://127.0.0.1:55778"
	I1212 20:06:47.813022    1528 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 20:06:47.830078    1528 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-12 19:49:17.606323144 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-12 20:06:45.789464240 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1212 20:06:47.830078    1528 kubeadm.go:1161] stopping kube-system containers ...
	I1212 20:06:47.833739    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 20:06:47.872403    1528 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 20:06:47.898698    1528 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:06:47.911626    1528 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 12 19:53 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 12 19:53 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 12 19:53 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 12 19:53 /etc/kubernetes/scheduler.conf
	
	I1212 20:06:47.916032    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:06:47.934293    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:06:47.947871    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:06:47.952020    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:06:47.971701    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:06:47.986795    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:06:47.991166    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:06:48.008021    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:06:48.023761    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:06:48.029138    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:06:48.047659    1528 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:06:48.063995    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:06:48.141323    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:06:48.685789    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:06:48.933405    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:06:49.007626    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:06:49.088118    1528 api_server.go:52] waiting for apiserver process to appear ...
	I1212 20:06:49.091668    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:49.594772    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:50.093859    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:50.594422    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:51.093806    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:51.593915    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:52.093893    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:52.594038    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:53.093417    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:53.593495    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:54.093802    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:54.594146    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:55.095283    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:55.594629    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:56.094166    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:56.593508    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:57.093792    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:57.594191    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:58.094043    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:58.593447    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:59.095461    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:59.594593    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:00.093887    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:00.593742    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:01.093796    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:01.593635    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:02.094124    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:02.594164    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:03.094112    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:03.593477    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:04.093750    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:04.595391    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:05.094206    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:05.595179    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:06.094740    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:06.594021    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:07.092923    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:07.594420    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:08.093543    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:08.593353    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:09.093866    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:09.594009    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:10.094124    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:10.593564    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:11.094124    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:11.594786    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:12.093907    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:12.595728    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:13.095070    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:13.594017    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:14.094874    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:14.595001    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:15.094580    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:15.594646    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:16.095074    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:16.594850    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:17.094067    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:17.594147    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:18.094262    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:18.594277    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:19.094229    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:19.593986    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:20.093873    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:20.593102    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:21.093881    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:21.594308    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:22.093613    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:22.594040    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:23.094021    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:23.594274    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:24.093605    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:24.594142    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:25.094736    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:25.593265    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:26.094197    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:26.594872    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:27.095670    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:27.594279    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:28.093920    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:28.596679    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:29.094004    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:29.594458    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:30.093715    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:30.594515    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:31.094349    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:31.594711    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:32.094230    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:32.594083    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:33.093810    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:33.595024    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:34.094786    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:34.594107    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:35.094421    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:35.594761    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:36.095704    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:36.596396    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:37.094385    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:37.593669    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:38.094137    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:38.595560    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:39.094405    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:39.595146    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:40.094116    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:40.595721    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:41.096666    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:41.595141    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:42.094696    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:42.595232    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:43.094232    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:43.595329    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:44.094121    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:44.594251    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:45.094024    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:45.594712    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:46.093802    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:46.594279    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:47.094868    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:47.594370    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:48.093917    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:48.594667    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:49.093256    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:07:49.126325    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.126325    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:07:49.130353    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:07:49.158022    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.158022    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:07:49.162811    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:07:49.190525    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.190525    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:07:49.194310    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:07:49.220030    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.220030    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:07:49.223677    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:07:49.249986    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.249986    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:07:49.253970    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:07:49.282441    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.282441    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:07:49.286057    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:07:49.315225    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.315248    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:07:49.315306    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:07:49.315306    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:07:49.374436    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:07:49.374436    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:07:49.404204    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:07:49.404204    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:07:49.493575    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:07:49.481243   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.484599   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.485624   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.487104   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.488000   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:07:49.481243   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.484599   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.485624   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.487104   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.488000   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:07:49.493575    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:07:49.493575    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:07:49.537752    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:07:49.537752    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:07:52.109985    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:52.133820    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:07:52.164388    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.164388    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:07:52.168109    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:07:52.195605    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.195605    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:07:52.199164    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:07:52.229188    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.229188    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:07:52.232745    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:07:52.256990    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.256990    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:07:52.261539    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:07:52.290862    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.290862    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:07:52.294555    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:07:52.324957    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.324957    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:07:52.330284    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:07:52.359197    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.359197    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:07:52.359197    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:07:52.359197    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:07:52.386524    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:07:52.386524    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:07:52.470690    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:07:52.461396   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.462458   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.463681   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.464521   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.466051   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:07:52.461396   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.462458   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.463681   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.464521   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.466051   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:07:52.470690    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:07:52.470690    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:07:52.511513    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:07:52.511513    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:07:52.560676    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:07:52.560676    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:07:55.127058    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:55.150663    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:07:55.181456    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.181456    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:07:55.184641    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:07:55.217269    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.217269    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:07:55.220911    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:07:55.250346    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.250346    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:07:55.254082    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:07:55.285676    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.285706    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:07:55.288968    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:07:55.315854    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.315854    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:07:55.319386    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:07:55.348937    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.348937    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:07:55.352894    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:07:55.380789    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.380853    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:07:55.380853    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:07:55.380883    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:07:55.463944    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:07:55.453644   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.455074   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.457094   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.458003   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.461398   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:07:55.453644   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.455074   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.457094   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.458003   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.461398   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:07:55.463944    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:07:55.463944    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:07:55.507780    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:07:55.507780    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:07:55.561906    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:07:55.561906    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:07:55.623372    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:07:55.623372    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:07:58.160009    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:58.184039    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:07:58.215109    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.215109    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:07:58.218681    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:07:58.247778    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.247778    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:07:58.251301    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:07:58.278710    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.278710    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:07:58.282296    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:07:58.308953    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.308953    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:07:58.312174    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:07:58.339973    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.340049    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:07:58.343731    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:07:58.374943    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.374943    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:07:58.378660    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:07:58.405372    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.405372    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:07:58.405372    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:07:58.405372    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:07:58.453718    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:07:58.453718    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:07:58.514502    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:07:58.514502    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:07:58.544394    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:07:58.544394    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:07:58.623232    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:07:58.613437   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.615268   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.617776   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.618728   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.619935   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:07:58.613437   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.615268   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.617776   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.618728   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.619935   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:07:58.623232    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:07:58.623232    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:01.169113    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:01.192583    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:01.222434    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.222434    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:01.225873    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:01.253020    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.253020    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:01.257395    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:01.286407    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.286407    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:01.290442    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:01.317408    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.317408    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:01.321138    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:01.348820    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.348820    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:01.352926    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:01.383541    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.383541    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:01.387373    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:01.415400    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.415431    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:01.415431    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:01.415466    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:01.481183    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:01.481183    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:01.512132    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:01.512132    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:01.598560    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:01.586562   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.587448   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.590300   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.591224   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.593552   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:01.586562   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.587448   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.590300   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.591224   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.593552   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:01.598601    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:01.598601    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:01.641848    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:01.641848    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:04.202764    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:04.225393    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:04.257048    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.257048    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:04.261463    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:04.289329    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.289329    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:04.295911    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:04.324136    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.324205    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:04.329272    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:04.355941    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.355941    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:04.359744    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:04.389386    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.389461    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:04.393063    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:04.421465    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.421465    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:04.425377    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:04.454159    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.454159    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:04.454185    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:04.454221    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:04.499238    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:04.499238    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:04.546668    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:04.546668    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:04.614181    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:04.614181    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:04.646155    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:04.646155    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:04.746527    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:04.735346   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.736502   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.738096   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.740089   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.741574   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:04.735346   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.736502   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.738096   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.740089   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.741574   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:07.252038    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:07.276838    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:07.307770    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.307770    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:07.311473    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:07.338086    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.338086    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:07.343809    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:07.373687    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.373687    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:07.377399    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:07.406083    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.406083    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:07.409835    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:07.437651    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.437651    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:07.441428    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:07.468369    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.468369    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:07.472164    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:07.503047    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.503047    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:07.503047    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:07.503811    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:07.531856    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:07.531856    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:07.618451    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:07.605790   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.608413   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.611125   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.611805   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.614027   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:07.605790   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.608413   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.611125   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.611805   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.614027   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:07.618451    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:07.618451    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:07.661072    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:07.661072    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:07.708185    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:07.708185    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:10.277741    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:10.301882    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:10.334646    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.334646    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:10.338176    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:10.369543    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.369543    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:10.372853    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:10.405159    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.405159    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:10.408623    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:10.436491    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.436491    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:10.440653    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:10.471674    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.471674    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:10.475616    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:10.503923    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.503923    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:10.507960    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:10.532755    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.532755    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:10.532755    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:10.532755    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:10.596502    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:10.596502    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:10.627352    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:10.627352    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:10.716582    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:10.705824   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.707137   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.709479   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.710611   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.712209   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:10.705824   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.707137   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.709479   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.710611   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.712209   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:10.716582    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:10.716582    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:10.758177    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:10.758177    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:13.312261    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:13.336629    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:13.366321    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.366321    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:13.370440    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:13.398643    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.398643    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:13.402381    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:13.432456    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.432481    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:13.436213    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:13.464635    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.464711    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:13.468308    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:13.495284    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.495284    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:13.499271    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:13.528325    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.528325    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:13.531787    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:13.562227    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.562227    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:13.562227    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:13.562227    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:13.663593    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:13.651715   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.652789   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.654090   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.658182   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.659318   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 20:08:13.663593    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:13.663593    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:13.704702    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:13.704702    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:13.753473    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:13.753473    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:13.816534    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:13.816534    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:16.353541    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:16.376390    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:16.407214    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.407214    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:16.410992    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:16.441225    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.441225    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:16.444710    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:16.474803    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.474803    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:16.478736    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:16.507490    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.507490    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:16.510890    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:16.542100    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.542196    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:16.546032    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:16.575799    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.575799    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:16.579959    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:16.607409    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.607409    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:16.607409    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:16.607409    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:16.635159    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:16.635159    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:16.716319    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:16.704484   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.705339   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.707697   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.710003   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.712279   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 20:08:16.716319    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:16.716319    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:16.759176    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:16.759176    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:16.808150    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:16.808180    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:19.374586    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:19.397466    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:19.428699    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.428699    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:19.432104    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:19.459357    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.459357    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:19.463506    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:19.492817    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.492862    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:19.496262    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:19.524604    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.524633    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:19.528245    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:19.554030    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.554030    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:19.557659    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:19.585449    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.585449    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:19.589270    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:19.617715    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.617715    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:19.617715    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:19.617715    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:19.665679    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:19.665679    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:19.731378    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:19.731378    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:19.760660    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:19.760660    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:19.846488    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:19.835706   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.837312   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.839548   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.840426   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.842652   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 20:08:19.846488    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:19.846534    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:22.396054    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:22.420446    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:22.451208    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.451246    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:22.455255    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:22.482900    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.482900    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:22.486411    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:22.515383    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.515383    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:22.518824    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:22.550034    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.550034    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:22.553623    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:22.581020    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.581020    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:22.585628    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:22.612869    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.612869    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:22.616928    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:22.644472    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.644472    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:22.644472    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:22.644472    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:22.708075    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:22.708075    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:22.738243    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:22.738270    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:22.821664    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:22.811477   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:22.812684   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:22.813664   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:22.815983   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:22.818141   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 20:08:22.821664    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:22.821664    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:22.864165    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:22.864165    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:25.420933    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:25.445913    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:25.482750    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.482780    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:25.486866    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:25.513327    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.513327    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:25.516888    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:25.544296    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.544296    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:25.547411    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:25.577831    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.577831    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:25.581764    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:25.611577    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.611577    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:25.614994    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:25.643683    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.643683    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:25.647543    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:25.673764    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.673764    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:25.673764    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:25.673764    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:25.756845    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:25.747568   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:25.748735   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:25.749881   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:25.750867   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:25.753666   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 20:08:25.756845    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:25.756845    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:25.796355    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:25.796355    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:25.848330    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:25.848330    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:25.908271    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:25.908271    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:28.444198    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:28.466730    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:28.495218    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.496317    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:28.499838    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:28.526946    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.526946    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:28.531098    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:28.558957    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.558957    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:28.563084    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:28.591401    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.591401    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:28.594622    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:28.621536    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.621536    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:28.625599    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:28.652819    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.652819    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:28.655938    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:28.684007    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.684007    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:28.684049    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:28.684049    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:28.766993    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:28.757413   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:28.758465   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:28.762706   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:28.763681   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:28.764948   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 20:08:28.766993    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:28.766993    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:28.808427    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:28.808427    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:28.854005    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:28.854005    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:28.915072    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:28.915072    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:31.448340    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:31.482817    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:31.516888    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.516948    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:31.520762    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:31.548829    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.548829    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:31.552634    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:31.580202    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.580202    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:31.583832    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:31.612644    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.612644    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:31.616408    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:31.641662    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.641662    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:31.645105    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:31.674858    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.674858    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:31.678481    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:31.708742    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.708742    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:31.708742    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:31.708742    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:31.737537    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:31.737537    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:31.815915    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:31.804811   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.805789   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.806945   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.808415   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.809568   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:31.804811   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.805789   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.806945   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.808415   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.809568   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:31.815915    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:31.815915    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:31.855387    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:31.855387    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:31.902882    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:31.902882    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:34.468874    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:34.492525    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:34.524158    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.524158    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:34.528390    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:34.555356    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.555356    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:34.558734    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:34.589102    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.589171    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:34.592795    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:34.621829    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.621829    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:34.625204    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:34.653376    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.653376    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:34.657009    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:34.683738    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.683738    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:34.686742    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:34.714674    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.714674    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:34.714674    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:34.714674    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:34.779026    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:34.779026    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:34.808978    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:34.808978    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:34.892063    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:34.879879   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.880859   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.883101   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.883949   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.886570   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:34.879879   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.880859   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.883101   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.883949   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.886570   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:34.892063    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:34.892063    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:34.931531    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:34.931531    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:37.485139    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:37.507669    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:37.539156    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.539156    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:37.543011    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:37.573040    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.573040    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:37.576524    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:37.606845    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.606845    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:37.610640    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:37.637362    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.637362    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:37.640345    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:37.667170    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.667203    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:37.670535    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:37.699517    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.699517    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:37.703317    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:37.728898    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.728898    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:37.728898    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:37.728898    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:37.794369    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:37.794369    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:37.824287    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:37.824287    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:37.909344    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:37.898187   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.899265   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.900556   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.901945   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.904063   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:37.898187   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.899265   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.900556   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.901945   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.904063   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:37.909344    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:37.909344    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:37.954162    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:37.954162    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:40.506487    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:40.531085    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:40.562228    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.562228    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:40.566239    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:40.592782    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.592782    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:40.597032    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:40.623771    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.623771    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:40.627181    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:40.653272    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.653272    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:40.657007    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:40.684331    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.684331    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:40.687951    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:40.717873    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.718396    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:40.722742    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:40.750968    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.750968    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:40.750968    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:40.750968    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:40.780652    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:40.780652    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:40.862566    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:40.851236   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.852305   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.854181   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.856807   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.857931   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:40.851236   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.852305   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.854181   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.856807   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.857931   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:40.862566    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:40.862566    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:40.901731    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:40.901731    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:40.950141    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:40.950141    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:43.517065    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:43.542117    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:43.570769    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.570769    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:43.574614    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:43.606209    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.606209    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:43.610144    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:43.636742    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.636742    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:43.640713    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:43.671147    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.671166    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:43.675284    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:43.702707    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.702707    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:43.709331    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:43.739560    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.739560    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:43.743495    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:43.773460    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.773460    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:43.773460    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:43.773460    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:43.839426    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:43.839426    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:43.869067    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:43.869067    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:43.956418    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:43.945790   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.947881   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.949529   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.951167   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.952658   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:43.945790   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.947881   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.949529   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.951167   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.952658   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:43.956418    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:43.956418    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:43.999225    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:43.999225    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:46.559969    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:46.583306    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:46.616304    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.616304    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:46.620185    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:46.649980    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.649980    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:46.653901    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:46.679706    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.679706    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:46.683349    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:46.709377    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.709377    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:46.713435    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:46.743714    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.743714    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:46.747353    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:46.774831    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.774831    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:46.778444    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:46.803849    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.803849    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:46.803849    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:46.803849    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:46.846976    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:46.846976    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:46.898873    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:46.898873    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:46.960800    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:46.960800    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:46.992131    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:46.992131    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:47.078211    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:47.065571   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.069068   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.070187   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.071256   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.072132   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:47.065571   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.069068   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.070187   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.071256   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.072132   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:49.584391    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:49.609888    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:49.644530    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.644530    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:49.648078    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:49.676237    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.676237    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:49.680633    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:49.711496    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.711496    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:49.714503    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:49.741598    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.741598    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:49.746023    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:49.774073    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.774073    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:49.780499    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:49.807422    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.807422    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:49.811492    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:49.837105    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.837105    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:49.837105    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:49.837105    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:49.919888    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:49.910085   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.911433   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.912870   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.913900   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.914973   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:49.910085   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.911433   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.912870   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.913900   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.914973   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:49.919888    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:49.919888    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:49.961375    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:49.961375    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:50.029040    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:50.029040    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:50.091715    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:50.091715    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:52.626760    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:52.650138    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:52.682125    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.682125    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:52.685499    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:52.716677    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.716677    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:52.720251    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:52.750215    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.750215    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:52.753203    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:52.783410    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.783410    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:52.786745    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:52.816028    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.816028    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:52.819028    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:52.847808    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.847808    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:52.851676    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:52.880388    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.880388    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:52.880388    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:52.880388    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:52.927060    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:52.927060    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:52.980540    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:52.980540    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:53.040013    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:53.040013    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:53.068682    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:53.068682    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:53.153542    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:53.142301   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.143114   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.146316   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.148645   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.149541   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:53.142301   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.143114   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.146316   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.148645   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.149541   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:55.659454    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:55.682885    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:55.711696    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.711696    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:55.718399    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:55.746229    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.746229    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:55.750441    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:55.780178    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.780210    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:55.784012    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:55.811985    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.811985    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:55.816792    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:55.847996    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.847996    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:55.851745    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:55.883521    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.883521    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:55.886915    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:55.914853    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.914853    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:55.914853    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:55.914853    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:55.960920    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:55.960920    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:56.026011    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:56.026011    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:56.053113    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:56.053113    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:56.136578    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:56.126520   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.127364   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.130322   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.131490   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.132838   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:56.126520   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.127364   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.130322   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.131490   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.132838   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:56.136578    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:56.136578    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:58.683199    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:58.705404    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:58.735584    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.735584    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:58.739795    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:58.770569    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.770569    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:58.774526    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:58.804440    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.804440    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:58.808498    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:58.836009    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.836009    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:58.840208    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:58.869192    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.869192    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:58.872945    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:58.902237    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.902237    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:58.905993    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:58.933450    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.933617    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:58.933617    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:58.933617    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:58.976315    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:58.976391    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:59.038199    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:59.038199    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:59.068976    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:59.068976    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:59.160516    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:59.150031   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.151396   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.152924   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.154293   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.156675   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:59.150031   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.151396   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.152924   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.154293   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.156675   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:59.160516    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:59.160516    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:01.709859    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:01.733860    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:01.762957    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.762957    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:01.766889    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:01.793351    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.793351    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:01.797156    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:01.823801    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.823801    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:01.827545    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:01.858811    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.858811    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:01.862667    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:01.888526    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.888601    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:01.892330    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:01.921800    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.921834    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:01.925710    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:01.954630    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.954630    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:01.954630    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:01.954630    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:02.019929    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:02.019929    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:02.050304    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:02.050304    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:02.137016    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:02.125446   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.126668   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.127557   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.130278   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.131874   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:02.125446   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.126668   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.127557   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.130278   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.131874   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:02.137016    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:02.137016    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:02.181380    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:02.181380    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:04.738393    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:04.761261    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:04.788560    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.788594    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:04.792550    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:04.822339    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.822339    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:04.826135    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:04.854461    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.854531    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:04.858147    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:04.886243    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.886243    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:04.890144    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:04.918123    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.918123    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:04.922152    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:04.949493    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.949557    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:04.953111    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:04.980390    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.980390    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:04.980390    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:04.980390    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:05.043888    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:05.043888    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:05.075474    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:05.075474    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:05.156773    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:05.145508   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.147081   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.148087   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.150297   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.151035   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:05.145508   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.147081   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.148087   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.150297   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.151035   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:05.156773    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:05.156773    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:05.198847    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:05.198847    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:07.752600    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:07.774442    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:07.801273    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.801315    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:07.804806    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:07.833315    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.833315    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:07.837119    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:07.866393    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.866417    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:07.869980    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:07.898480    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.898480    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:07.902426    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:07.929231    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.929231    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:07.932443    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:07.962786    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.962786    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:07.966343    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:07.993681    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.993681    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:07.993681    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:07.993681    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:08.075996    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:08.065379   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.066297   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.068979   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.070479   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.071802   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:08.065379   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.066297   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.068979   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.070479   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.071802   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:08.075996    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:08.075996    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:08.115751    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:08.115751    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:08.167959    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:08.167959    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:08.229990    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:08.229990    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:10.765802    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:10.787970    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:10.817520    1528 logs.go:282] 0 containers: []
	W1212 20:09:10.817520    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:10.821188    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:10.850905    1528 logs.go:282] 0 containers: []
	W1212 20:09:10.850905    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:10.854741    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:10.882098    1528 logs.go:282] 0 containers: []
	W1212 20:09:10.882098    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:10.885759    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:10.915908    1528 logs.go:282] 0 containers: []
	W1212 20:09:10.915931    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:10.919484    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:10.947704    1528 logs.go:282] 0 containers: []
	W1212 20:09:10.947704    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:10.951840    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:10.979998    1528 logs.go:282] 0 containers: []
	W1212 20:09:10.979998    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:10.983440    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:11.012620    1528 logs.go:282] 0 containers: []
	W1212 20:09:11.012620    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:11.012620    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:11.012620    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:11.075910    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:11.075910    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:11.105013    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:11.105013    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:11.184242    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:11.174258   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.175183   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.177449   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.178930   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.180215   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:11.174258   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.175183   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.177449   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.178930   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.180215   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:11.184242    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:11.184242    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:11.228072    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:11.228072    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:13.782352    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:13.806071    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:13.835380    1528 logs.go:282] 0 containers: []
	W1212 20:09:13.835380    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:13.839913    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:13.866644    1528 logs.go:282] 0 containers: []
	W1212 20:09:13.866644    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:13.870648    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:13.900617    1528 logs.go:282] 0 containers: []
	W1212 20:09:13.900687    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:13.904431    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:13.928026    1528 logs.go:282] 0 containers: []
	W1212 20:09:13.928026    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:13.931830    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:13.961813    1528 logs.go:282] 0 containers: []
	W1212 20:09:13.961813    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:13.965790    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:13.993658    1528 logs.go:282] 0 containers: []
	W1212 20:09:13.993658    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:13.997303    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:14.025708    1528 logs.go:282] 0 containers: []
	W1212 20:09:14.025708    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:14.025708    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:14.025708    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:14.106478    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:14.097472   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.098766   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.100198   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.101259   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.102326   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:14.097472   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.098766   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.100198   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.101259   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.102326   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:14.106478    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:14.106478    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:14.148128    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:14.148128    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:14.203808    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:14.203885    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:14.267083    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:14.267083    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:16.803844    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:16.828076    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:16.857370    1528 logs.go:282] 0 containers: []
	W1212 20:09:16.857370    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:16.861602    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:16.888928    1528 logs.go:282] 0 containers: []
	W1212 20:09:16.888928    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:16.892594    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:16.918950    1528 logs.go:282] 0 containers: []
	W1212 20:09:16.918950    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:16.922184    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:16.949697    1528 logs.go:282] 0 containers: []
	W1212 20:09:16.949697    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:16.953615    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:16.980582    1528 logs.go:282] 0 containers: []
	W1212 20:09:16.980582    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:16.984239    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:17.011537    1528 logs.go:282] 0 containers: []
	W1212 20:09:17.011537    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:17.015236    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:17.044025    1528 logs.go:282] 0 containers: []
	W1212 20:09:17.044025    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:17.044059    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:17.044059    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:17.108593    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:17.108593    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:17.140984    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:17.140984    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:17.223600    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:17.212237   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.213454   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.216019   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.218047   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.219305   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:17.212237   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.213454   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.216019   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.218047   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.219305   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:17.223647    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:17.223647    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:17.265808    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:17.265808    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:19.827665    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:19.848754    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:19.880440    1528 logs.go:282] 0 containers: []
	W1212 20:09:19.880440    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:19.884631    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:19.911688    1528 logs.go:282] 0 containers: []
	W1212 20:09:19.911688    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:19.915503    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:19.942894    1528 logs.go:282] 0 containers: []
	W1212 20:09:19.942894    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:19.946623    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:19.974622    1528 logs.go:282] 0 containers: []
	W1212 20:09:19.974622    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:19.978983    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:20.005201    1528 logs.go:282] 0 containers: []
	W1212 20:09:20.005201    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:20.009244    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:20.040298    1528 logs.go:282] 0 containers: []
	W1212 20:09:20.040298    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:20.043935    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:20.073267    1528 logs.go:282] 0 containers: []
	W1212 20:09:20.073267    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:20.073267    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:20.073267    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:20.139351    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:20.139351    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:20.170692    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:20.170692    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:20.255758    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:20.244828   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.245691   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.248701   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.249629   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.251916   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:20.244828   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.245691   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.248701   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.249629   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.251916   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:20.255758    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:20.255758    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:20.296082    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:20.296082    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:22.852656    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:22.877113    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:22.907531    1528 logs.go:282] 0 containers: []
	W1212 20:09:22.907601    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:22.911006    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:22.938103    1528 logs.go:282] 0 containers: []
	W1212 20:09:22.938103    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:22.941741    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:22.969757    1528 logs.go:282] 0 containers: []
	W1212 20:09:22.969757    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:22.973641    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:23.003718    1528 logs.go:282] 0 containers: []
	W1212 20:09:23.003718    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:23.007427    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:23.034105    1528 logs.go:282] 0 containers: []
	W1212 20:09:23.034105    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:23.038551    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:23.068440    1528 logs.go:282] 0 containers: []
	W1212 20:09:23.068440    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:23.072250    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:23.099797    1528 logs.go:282] 0 containers: []
	W1212 20:09:23.099797    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:23.099797    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:23.099797    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:23.127441    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:23.127441    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:23.213420    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:23.200013   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.205001   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.207092   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.207990   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.210181   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:23.200013   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.205001   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.207092   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.207990   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.210181   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:23.213420    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:23.213420    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:23.258155    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:23.258155    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:23.304413    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:23.304413    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:25.871188    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:25.894216    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:25.924994    1528 logs.go:282] 0 containers: []
	W1212 20:09:25.924994    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:25.928893    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:25.956143    1528 logs.go:282] 0 containers: []
	W1212 20:09:25.956143    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:25.961174    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:25.988898    1528 logs.go:282] 0 containers: []
	W1212 20:09:25.988898    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:25.993364    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:26.021169    1528 logs.go:282] 0 containers: []
	W1212 20:09:26.021233    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:26.024829    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:26.051922    1528 logs.go:282] 0 containers: []
	W1212 20:09:26.051922    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:26.055062    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:26.082542    1528 logs.go:282] 0 containers: []
	W1212 20:09:26.082542    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:26.086788    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:26.117355    1528 logs.go:282] 0 containers: []
	W1212 20:09:26.117355    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:26.117355    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:26.117355    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:26.180352    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:26.180352    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:26.211105    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:26.211105    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:26.296971    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:26.286964   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.287943   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.291491   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.292547   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.294617   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:26.286964   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.287943   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.291491   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.292547   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.294617   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:26.296971    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:26.296971    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:26.338711    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:26.338711    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:28.896860    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:28.920643    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:28.950389    1528 logs.go:282] 0 containers: []
	W1212 20:09:28.950389    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:28.955391    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:28.982117    1528 logs.go:282] 0 containers: []
	W1212 20:09:28.982117    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:28.986142    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:29.015662    1528 logs.go:282] 0 containers: []
	W1212 20:09:29.015662    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:29.019455    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:29.049660    1528 logs.go:282] 0 containers: []
	W1212 20:09:29.049660    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:29.053631    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:29.081889    1528 logs.go:282] 0 containers: []
	W1212 20:09:29.081889    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:29.086411    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:29.114138    1528 logs.go:282] 0 containers: []
	W1212 20:09:29.114138    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:29.119659    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:29.150078    1528 logs.go:282] 0 containers: []
	W1212 20:09:29.150078    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:29.150078    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:29.150078    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:29.214085    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:29.214085    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:29.248111    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:29.248111    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:29.331531    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:29.323301   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.324246   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.326232   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.327289   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.328469   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:29.323301   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.324246   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.326232   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.327289   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.328469   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:29.331531    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:29.331573    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:29.371475    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:29.371475    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:31.925581    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:31.948416    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:31.979393    1528 logs.go:282] 0 containers: []
	W1212 20:09:31.979436    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:31.982941    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:32.012671    1528 logs.go:282] 0 containers: []
	W1212 20:09:32.012745    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:32.016490    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:32.044571    1528 logs.go:282] 0 containers: []
	W1212 20:09:32.044571    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:32.049959    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:32.077737    1528 logs.go:282] 0 containers: []
	W1212 20:09:32.077737    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:32.082023    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:32.112680    1528 logs.go:282] 0 containers: []
	W1212 20:09:32.112680    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:32.116732    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:32.144079    1528 logs.go:282] 0 containers: []
	W1212 20:09:32.144079    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:32.147365    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:32.175674    1528 logs.go:282] 0 containers: []
	W1212 20:09:32.175674    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:32.175674    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:32.175674    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:32.238433    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:32.238433    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:32.268680    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:32.268680    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:32.350924    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:32.339559   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.340729   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.342082   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.343375   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.344861   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:32.339559   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.340729   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.342082   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.343375   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.344861   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:32.351446    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:32.351446    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:32.393409    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:32.393409    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:34.949675    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:34.974371    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:35.003673    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.003673    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:35.007894    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:35.036794    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.036794    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:35.040718    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:35.068827    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.068827    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:35.073552    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:35.101505    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.101505    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:35.105374    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:35.132637    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.132637    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:35.135977    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:35.164108    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.164108    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:35.168327    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:35.196237    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.196237    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:35.196237    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:35.196237    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:35.225096    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:35.225096    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:35.310720    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:35.296566   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.297390   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.300154   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.302418   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.302966   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:35.296566   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.297390   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.300154   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.302418   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.302966   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:35.310720    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:35.310720    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:35.352640    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:35.352640    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:35.405163    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:35.405684    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:37.970126    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:37.993740    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:38.021567    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.021567    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:38.025733    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:38.054259    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.054259    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:38.058230    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:38.091609    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.091609    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:38.094726    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:38.121402    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.121402    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:38.124780    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:38.156230    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.156230    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:38.159968    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:38.187111    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.187111    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:38.191000    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:38.219114    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.219114    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:38.219114    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:38.219163    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:38.267592    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:38.267642    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:38.332291    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:38.332291    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:38.362654    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:38.362654    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:38.450249    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:38.438367   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.439369   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.441361   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.442677   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.443575   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:38.438367   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.439369   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.441361   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.442677   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.443575   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:38.450249    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:38.450249    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:41.000122    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:41.025061    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:41.056453    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.056453    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:41.060356    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:41.090046    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.090046    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:41.096769    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:41.124375    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.124375    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:41.128276    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:41.155835    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.155835    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:41.159800    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:41.188748    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.188748    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:41.193110    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:41.220152    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.220152    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:41.224010    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:41.252532    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.252532    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:41.252532    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:41.252532    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:41.316983    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:41.316983    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:41.347558    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:41.347558    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:41.428225    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:41.416671   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.417861   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.419148   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.420206   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.420911   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:41.416671   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.417861   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.419148   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.420206   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.420911   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:41.428225    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:41.428225    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:41.470919    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:41.470919    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:44.030446    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:44.055047    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:44.084459    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.084459    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:44.088206    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:44.117052    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.117052    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:44.120537    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:44.147556    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.147556    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:44.152098    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:44.180075    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.180075    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:44.183790    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:44.210767    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.210767    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:44.214367    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:44.240217    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.240217    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:44.244696    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:44.273318    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.273318    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:44.273318    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:44.273371    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:44.339517    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:44.339517    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:44.369771    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:44.369771    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:44.450064    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:44.438924   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.439839   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.441693   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.442539   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.445500   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:44.438924   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.439839   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.441693   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.442539   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.445500   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:44.450064    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:44.450064    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:44.493504    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:44.493504    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:47.062950    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:47.087994    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:47.118381    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.118409    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:47.121556    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:47.150429    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.150429    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:47.154790    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:47.182604    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.182604    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:47.186262    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:47.213354    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.213354    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:47.217174    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:47.246442    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.246442    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:47.251292    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:47.280336    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.280336    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:47.283865    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:47.311245    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.311323    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:47.311323    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:47.311323    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:47.374063    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:47.374063    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:47.404257    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:47.404257    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:47.493784    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:47.483027   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.484386   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.488247   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.489212   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.490430   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:47.483027   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.484386   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.488247   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.489212   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.490430   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:47.493784    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:47.493784    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:47.546267    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:47.546267    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:50.104321    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:50.126581    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:50.155564    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.155564    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:50.160428    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:50.189268    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.189268    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:50.192916    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:50.218955    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.218955    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:50.222686    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:50.249342    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.249342    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:50.253397    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:50.283028    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.283028    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:50.286951    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:50.325979    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.325979    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:50.329622    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:50.358362    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.358362    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:50.358362    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:50.358362    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:50.422488    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:50.422488    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:50.452652    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:50.452652    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:50.550551    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:50.541153   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.542372   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.543369   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.544652   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.545772   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:50.541153   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.542372   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.543369   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.544652   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.545772   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:50.550602    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:50.550602    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:50.590552    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:50.590552    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:53.158722    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:53.182259    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:53.211903    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.211903    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:53.215402    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:53.243958    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.243958    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:53.247562    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:53.275751    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.275751    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:53.279763    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:53.306836    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.306836    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:53.310872    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:53.337813    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.337813    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:53.341633    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:53.371291    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.371291    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:53.374974    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:53.401726    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.401726    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:53.401726    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:53.401726    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:53.484480    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:53.475528   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.476720   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.477955   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.479240   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.480197   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:53.475528   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.476720   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.477955   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.479240   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.480197   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:53.484480    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:53.484480    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:53.548050    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:53.548050    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:53.599287    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:53.599439    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:53.660624    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:53.660624    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:56.196823    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:56.221135    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:56.250407    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.250407    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:56.254016    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:56.285901    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.285901    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:56.290067    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:56.318341    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.318341    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:56.321789    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:56.352739    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.352739    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:56.356470    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:56.384106    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.384106    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:56.388211    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:56.415890    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.415890    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:56.420087    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:56.447932    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.447932    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:56.447932    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:56.447932    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:56.477708    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:56.477708    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:56.588387    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:56.579332   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.580404   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.581153   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.583503   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.584370   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:56.579332   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.580404   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.581153   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.583503   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.584370   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:56.588387    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:56.588387    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:56.628140    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:56.629024    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:56.673720    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:56.673720    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:59.242052    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:59.264739    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:59.293601    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.293601    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:59.297772    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:59.324701    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.324701    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:59.328642    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:59.358373    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.358373    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:59.362425    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:59.392638    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.392638    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:59.396206    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:59.423777    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.423777    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:59.427998    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:59.455368    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.455368    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:59.460647    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:59.488029    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.488029    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:59.488029    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:59.488029    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:59.548806    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:59.548806    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:59.580620    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:59.580620    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:59.670291    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:59.659775   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.660719   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.663685   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.664723   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.665878   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:59.659775   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.660719   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.663685   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.664723   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.665878   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:59.670291    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:59.670291    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:59.715000    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:59.715000    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:02.271675    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:02.295613    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:02.328792    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.328792    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:02.332483    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:02.364136    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.364136    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:02.368415    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:02.396018    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.396018    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:02.399987    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:02.426946    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.426946    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:02.430641    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:02.457307    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.457307    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:02.461639    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:02.490776    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.490776    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:02.495011    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:02.535030    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.535030    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:02.535030    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:02.535030    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:02.598020    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:02.598020    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:02.627885    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:02.627885    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:02.704890    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:02.692184   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.693898   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.695260   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.696802   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.698053   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:02.692184   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.693898   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.695260   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.696802   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.698053   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:02.704939    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:02.704939    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:02.743781    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:02.743781    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:05.296529    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:05.320338    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:05.350975    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.350975    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:05.354341    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:05.384954    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.384954    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:05.389226    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:05.416593    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.416663    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:05.420370    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:05.448275    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.448306    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:05.451950    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:05.489214    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.489214    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:05.492826    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:05.542815    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.542815    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:05.546994    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:05.577967    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.577967    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:05.577967    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:05.577967    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:05.666752    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:05.655586   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.656570   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.657861   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.659073   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.660173   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:05.655586   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.656570   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.657861   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.659073   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.660173   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:05.666752    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:05.666752    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:05.710699    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:05.710699    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:05.761552    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:05.761552    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:05.824698    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:05.824698    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:08.358868    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:08.384185    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:08.414077    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.414077    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:08.417802    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:08.449585    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.449585    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:08.453707    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:08.481690    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.481690    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:08.485802    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:08.526849    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.526849    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:08.530588    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:08.561211    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.561211    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:08.565127    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:08.592694    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.592781    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:08.596577    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:08.625262    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.625262    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:08.625262    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:08.625335    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:08.685169    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:08.685169    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:08.715897    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:08.715897    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:08.803701    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:08.791784   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.793050   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.794221   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.795525   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.797359   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:08.791784   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.793050   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.794221   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.795525   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.797359   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:08.803701    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:08.803701    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:08.843054    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:08.843054    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:11.399600    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:11.423207    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:11.452824    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.452824    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:11.456632    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:11.485718    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.485718    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:11.489975    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:11.516373    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.516442    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:11.520086    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:11.550008    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.550008    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:11.553479    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:11.582422    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.582422    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:11.586067    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:11.614204    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.614204    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:11.617891    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:11.647117    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.647117    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:11.647117    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:11.647117    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:11.708885    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:11.708885    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:11.738490    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:11.738490    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:11.827046    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:11.816517   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.817635   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.818643   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.819970   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.821231   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:11.816517   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.817635   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.818643   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.819970   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.821231   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:11.827046    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:11.827107    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:11.866493    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:11.866493    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:14.418219    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:14.441326    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:14.471617    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.471617    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:14.475764    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:14.525977    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.525977    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:14.530095    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:14.559065    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.559065    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:14.562300    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:14.591222    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.591222    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:14.595004    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:14.623409    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.623409    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:14.626892    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:14.654709    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.654709    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:14.658517    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:14.685033    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.685033    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:14.685033    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:14.685033    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:14.729797    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:14.729797    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:14.775571    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:14.775571    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:14.837326    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:14.837326    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:14.868773    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:14.868773    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:14.947701    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:14.936523   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.939018   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.940394   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.941385   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.944155   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:14.936523   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.939018   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.940394   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.941385   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.944155   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:17.453450    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:17.476221    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:17.508293    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.508388    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:17.512181    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:17.543844    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.543844    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:17.547662    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:17.575201    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.575201    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:17.578822    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:17.606210    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.606210    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:17.609909    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:17.635671    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.635671    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:17.639317    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:17.668567    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.668567    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:17.671701    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:17.698754    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.698754    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:17.698754    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:17.698835    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:17.746368    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:17.746368    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:17.807375    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:17.807375    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:17.838385    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:17.838385    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:17.926603    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:17.913747   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.914551   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.916660   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.920134   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.920887   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:17.913747   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.914551   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.916660   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.920134   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.920887   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:17.926603    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:17.926648    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:20.475641    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:20.498334    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:20.527197    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.527197    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:20.530922    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:20.557934    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.557934    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:20.561696    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:20.589458    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.589458    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:20.593618    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:20.618953    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.619013    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:20.622779    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:20.650087    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.650087    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:20.653349    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:20.680898    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.680898    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:20.684841    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:20.711841    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.711841    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:20.711841    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:20.711841    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:20.773325    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:20.773325    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:20.802932    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:20.802932    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:20.882468    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:20.873604   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.874733   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.875947   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.877488   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.878762   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:20.873604   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.874733   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.875947   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.877488   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.878762   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:20.882468    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:20.882468    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:20.924918    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:20.924918    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:23.483925    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:23.503925    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:23.531502    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.531502    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:23.535209    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:23.566493    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.566493    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:23.569915    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:23.598869    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.598869    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:23.603128    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:23.629658    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.629658    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:23.633104    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:23.659718    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.659718    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:23.663327    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:23.693156    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.693156    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:23.696530    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:23.727025    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.727025    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:23.727025    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:23.727025    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:23.788970    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:23.788970    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:23.819732    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:23.819732    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:23.903797    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:23.893226   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.894440   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.895734   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.897141   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.899253   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:23.893226   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.894440   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.895734   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.897141   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.899253   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:23.903797    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:23.903797    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:23.943716    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:23.943716    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:26.496986    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:26.519387    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:26.546439    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.546439    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:26.550311    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:26.579658    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.579658    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:26.583767    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:26.611690    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.611690    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:26.616096    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:26.642773    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.642773    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:26.646291    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:26.674086    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.674086    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:26.677423    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:26.705896    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.705896    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:26.709747    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:26.736563    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.736563    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:26.736563    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:26.736563    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:26.797921    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:26.797921    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:26.827915    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:26.827915    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:26.912180    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:26.902978   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.904059   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.905126   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.906124   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.907093   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:26.902978   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.904059   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.905126   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.906124   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.907093   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:26.912180    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:26.912180    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:26.952784    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:26.952784    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:29.506291    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:29.528153    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:29.558126    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.558126    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:29.562358    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:29.592320    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.592320    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:29.596049    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:29.628556    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.628556    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:29.632809    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:29.657311    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.657311    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:29.661781    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:29.690232    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.690261    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:29.693735    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:29.722288    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.722288    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:29.725599    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:29.757022    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.757022    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:29.757057    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:29.757057    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:29.838684    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:29.829268   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.831893   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.834799   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.835771   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.837035   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:29.829268   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.831893   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.834799   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.835771   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.837035   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:29.838684    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:29.840075    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:29.881968    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:29.881968    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:29.937264    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:29.937264    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:30.003954    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:30.003954    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:32.543156    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:32.567379    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:32.595089    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.595089    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:32.599147    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:32.627893    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.627962    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:32.631484    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:32.658969    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.658969    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:32.662719    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:32.689837    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.689837    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:32.693526    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:32.719931    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.719931    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:32.723427    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:32.754044    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.754044    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:32.757365    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:32.785242    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.785242    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:32.785242    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:32.785242    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:32.866344    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:32.854769   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.856146   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.858548   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.859713   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.861545   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:32.854769   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.856146   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.858548   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.859713   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.861545   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:32.866344    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:32.866344    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:32.910000    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:32.910000    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:32.959713    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:32.959713    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:33.023739    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:33.023739    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:35.563488    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:35.587848    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:35.619497    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.619497    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:35.625107    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:35.653936    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.653936    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:35.657619    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:35.684524    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.684524    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:35.687685    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:35.718759    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.718759    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:35.722575    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:35.749655    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.749655    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:35.753297    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:35.780974    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.780974    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:35.784685    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:35.810182    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.810182    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:35.810182    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:35.810182    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:35.892605    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:35.881515   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.884461   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.886394   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.887420   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.888495   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:35.881515   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.884461   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.886394   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.887420   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.888495   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:35.892605    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:35.892605    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:35.932890    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:35.932890    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:35.985679    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:35.985679    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:36.046361    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:36.046361    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:38.583800    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:38.606814    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:38.638211    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.638211    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:38.642266    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:38.669848    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.669848    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:38.673886    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:38.700984    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.700984    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:38.705078    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:38.729910    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.729910    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:38.733986    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:38.760705    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.760705    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:38.765121    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:38.799915    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.799915    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:38.804009    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:38.833364    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.833364    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:38.833364    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:38.833364    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:38.913728    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:38.904474   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.905728   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.906533   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.908851   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.910188   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:38.904474   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.905728   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.906533   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.908851   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.910188   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:38.914694    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:38.914694    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:38.953812    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:38.953812    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:38.999712    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:38.999712    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:39.060789    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:39.060789    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:41.597593    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:41.620430    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:41.650082    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.650082    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:41.653991    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:41.681237    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.681306    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:41.684963    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:41.713795    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.713795    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:41.719712    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:41.749037    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.749037    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:41.753070    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:41.779427    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.779427    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:41.783501    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:41.815751    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.815751    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:41.819560    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:41.847881    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.847881    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:41.847881    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:41.847931    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:41.927320    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:41.917717   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.918601   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.921188   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.923369   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.924481   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:41.917717   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.918601   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.921188   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.923369   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.924481   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:41.927320    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:41.927320    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:41.970940    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:41.970940    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:42.027555    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:42.027555    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:42.089451    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:42.089451    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:44.625751    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:44.648990    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:44.676551    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.676585    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:44.679722    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:44.709172    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.709172    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:44.713304    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:44.743046    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.743046    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:44.748526    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:44.778521    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.778521    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:44.782734    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:44.814603    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.814603    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:44.817683    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:44.845948    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.845948    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:44.849265    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:44.879812    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.879812    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:44.879812    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:44.879812    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:44.944127    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:44.944127    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:44.974113    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:44.974113    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:45.057102    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:45.045651   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.047975   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.050361   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.051485   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.052325   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:45.045651   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.047975   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.050361   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.051485   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.052325   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:45.057102    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:45.057102    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:45.100139    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:45.100139    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:47.652183    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:47.675849    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:47.706239    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.706239    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:47.709475    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:47.741233    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.741233    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:47.744861    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:47.774055    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.774055    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:47.777505    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:47.805794    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.805794    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:47.808964    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:47.836392    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.836392    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:47.841779    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:47.870715    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.870715    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:47.874288    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:47.901831    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.901831    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:47.901831    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:47.901831    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:47.944346    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:47.944346    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:47.988778    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:47.988778    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:48.052537    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:48.052537    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:48.083339    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:48.083339    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:48.169498    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:48.160314   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.161274   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.162381   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.163778   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.164689   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:48.160314   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.161274   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.162381   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.163778   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.164689   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:50.675888    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:50.695141    1528 kubeadm.go:602] duration metric: took 4m2.9691176s to restartPrimaryControlPlane
	W1212 20:10:50.695255    1528 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 20:10:50.699541    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1212 20:10:51.173784    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:10:51.196593    1528 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:10:51.210961    1528 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:10:51.215040    1528 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:10:51.228862    1528 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:10:51.228862    1528 kubeadm.go:158] found existing configuration files:
	
	I1212 20:10:51.232787    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:10:51.246730    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:10:51.251357    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:10:51.268580    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:10:51.283713    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:10:51.288367    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:10:51.308779    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:10:51.322868    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:10:51.327510    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:10:51.347243    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:10:51.360015    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:10:51.365274    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:10:51.383196    1528 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:10:51.503494    1528 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1212 20:10:51.590365    1528 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 20:10:51.685851    1528 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:14:52.890657    1528 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 20:14:52.890657    1528 kubeadm.go:319] 
	I1212 20:14:52.891189    1528 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 20:14:52.897133    1528 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 20:14:52.897133    1528 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:14:52.897133    1528 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:14:52.897133    1528 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_INET: enabled
	I1212 20:14:52.898464    1528 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 20:14:52.898582    1528 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 20:14:52.898779    1528 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 20:14:52.898920    1528 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 20:14:52.899045    1528 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 20:14:52.899131    1528 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 20:14:52.899262    1528 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 20:14:52.899432    1528 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 20:14:52.899517    1528 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 20:14:52.899644    1528 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 20:14:52.899729    1528 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 20:14:52.899847    1528 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 20:14:52.900038    1528 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 20:14:52.900217    1528 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 20:14:52.900390    1528 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 20:14:52.900502    1528 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 20:14:52.900574    1528 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 20:14:52.900710    1528 kubeadm.go:319] OS: Linux
	I1212 20:14:52.900833    1528 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:14:52.900915    1528 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 20:14:52.901708    1528 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:14:52.901818    1528 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:14:52.901818    1528 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:14:52.901818    1528 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:14:52.906810    1528 out.go:252]   - Generating certificates and keys ...
	I1212 20:14:52.906810    1528 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:14:52.906810    1528 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:14:52.906810    1528 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 20:14:52.907808    1528 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:14:52.907808    1528 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:14:52.907808    1528 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:14:52.907808    1528 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:14:52.908849    1528 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:14:52.908909    1528 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:14:52.908909    1528 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:14:52.908909    1528 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:14:52.912070    1528 out.go:252]   - Booting up control plane ...
	I1212 20:14:52.912070    1528 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:14:52.912070    1528 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:14:52.912070    1528 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:14:52.912070    1528 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:14:52.913075    1528 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:14:52.913075    1528 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:14:52.913075    1528 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:14:52.913075    1528 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:14:52.914083    1528 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:14:52.914083    1528 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:14:52.914083    1528 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000441542s
	I1212 20:14:52.914083    1528 kubeadm.go:319] 
	I1212 20:14:52.914083    1528 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 20:14:52.914083    1528 kubeadm.go:319] 	- The kubelet is not running
	I1212 20:14:52.914083    1528 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 20:14:52.914083    1528 kubeadm.go:319] 
	I1212 20:14:52.914083    1528 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 20:14:52.915069    1528 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 20:14:52.915069    1528 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 20:14:52.915069    1528 kubeadm.go:319] 
	W1212 20:14:52.915069    1528 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000441542s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 20:14:52.921774    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1212 20:14:53.390305    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:14:53.408818    1528 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:14:53.413243    1528 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:14:53.425325    1528 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:14:53.425325    1528 kubeadm.go:158] found existing configuration files:
	
	I1212 20:14:53.430625    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:14:53.442895    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:14:53.446965    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:14:53.464658    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:14:53.478038    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:14:53.482805    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:14:53.499083    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:14:53.513919    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:14:53.518566    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:14:53.538555    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:14:53.552479    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:14:53.557205    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:14:53.576642    1528 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:14:53.698383    1528 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1212 20:14:53.775189    1528 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 20:14:53.868267    1528 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:18:54.359522    1528 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 20:18:54.359522    1528 kubeadm.go:319] 
	I1212 20:18:54.359522    1528 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 20:18:54.362954    1528 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 20:18:54.363173    1528 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:18:54.363383    1528 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:18:54.363609    1528 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 20:18:54.363609    1528 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 20:18:54.363609    1528 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 20:18:54.363609    1528 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 20:18:54.363609    1528 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 20:18:54.363609    1528 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 20:18:54.364132    1528 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 20:18:54.364423    1528 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 20:18:54.364423    1528 kubeadm.go:319] CONFIG_INET: enabled
	I1212 20:18:54.364423    1528 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 20:18:54.364423    1528 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 20:18:54.364423    1528 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 20:18:54.364423    1528 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 20:18:54.364950    1528 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 20:18:54.365131    1528 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 20:18:54.365131    1528 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 20:18:54.365131    1528 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 20:18:54.365131    1528 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 20:18:54.365131    1528 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 20:18:54.365131    1528 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 20:18:54.365662    1528 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 20:18:54.365743    1528 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 20:18:54.365828    1528 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 20:18:54.365917    1528 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 20:18:54.366005    1528 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 20:18:54.366087    1528 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 20:18:54.366168    1528 kubeadm.go:319] OS: Linux
	I1212 20:18:54.366224    1528 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:18:54.366255    1528 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 20:18:54.366823    1528 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:18:54.366960    1528 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:18:54.367127    1528 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:18:54.367127    1528 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:18:54.369422    1528 out.go:252]   - Generating certificates and keys ...
	I1212 20:18:54.369422    1528 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:18:54.369422    1528 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:18:54.369953    1528 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 20:18:54.370159    1528 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 20:18:54.370228    1528 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 20:18:54.370309    1528 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 20:18:54.370471    1528 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 20:18:54.370639    1528 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 20:18:54.370717    1528 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 20:18:54.370717    1528 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 20:18:54.370717    1528 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 20:18:54.370717    1528 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:18:54.370717    1528 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:18:54.371251    1528 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:18:54.371313    1528 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:18:54.371344    1528 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:18:54.371344    1528 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:18:54.371344    1528 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:18:54.371344    1528 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:18:54.374291    1528 out.go:252]   - Booting up control plane ...
	I1212 20:18:54.374291    1528 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:18:54.374291    1528 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:18:54.374291    1528 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:18:54.374291    1528 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:18:54.375259    1528 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:18:54.375259    1528 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:18:54.375259    1528 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:18:54.375259    1528 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:18:54.375259    1528 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:18:54.375259    1528 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:18:54.375259    1528 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000961807s
	I1212 20:18:54.375259    1528 kubeadm.go:319] 
	I1212 20:18:54.376246    1528 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 20:18:54.376246    1528 kubeadm.go:319] 	- The kubelet is not running
	I1212 20:18:54.376405    1528 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 20:18:54.376405    1528 kubeadm.go:319] 
	I1212 20:18:54.376405    1528 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 20:18:54.376405    1528 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 20:18:54.376405    1528 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 20:18:54.376405    1528 kubeadm.go:319] 
	I1212 20:18:54.376405    1528 kubeadm.go:403] duration metric: took 12m6.6943451s to StartCluster
	I1212 20:18:54.376405    1528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:18:54.380250    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:18:54.441453    1528 cri.go:89] found id: ""
	I1212 20:18:54.441453    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.441453    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:18:54.441453    1528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:18:54.446414    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:18:54.508794    1528 cri.go:89] found id: ""
	I1212 20:18:54.508794    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.508794    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:18:54.508794    1528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:18:54.513698    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:18:54.553213    1528 cri.go:89] found id: ""
	I1212 20:18:54.553257    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.553257    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:18:54.553295    1528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:18:54.558235    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:18:54.603262    1528 cri.go:89] found id: ""
	I1212 20:18:54.603262    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.603262    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:18:54.603262    1528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:18:54.608185    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:18:54.648151    1528 cri.go:89] found id: ""
	I1212 20:18:54.648151    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.648151    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:18:54.648151    1528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:18:54.652647    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:18:54.693419    1528 cri.go:89] found id: ""
	I1212 20:18:54.693419    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.693419    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:18:54.693419    1528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:18:54.697661    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:18:54.737800    1528 cri.go:89] found id: ""
	I1212 20:18:54.737800    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.737800    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:18:54.737858    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:18:54.737858    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:18:54.790460    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:18:54.790460    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:18:54.852887    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:18:54.852887    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:18:54.883744    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:18:54.883744    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:18:54.965870    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:18:54.956009   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.957362   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.959496   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.962003   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.963316   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:18:54.956009   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.957362   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.959496   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.962003   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.963316   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:18:54.965870    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:18:54.965870    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1212 20:18:55.009075    1528 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000961807s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 20:18:55.009075    1528 out.go:285] * 
	W1212 20:18:55.009075    1528 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000961807s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 20:18:55.009075    1528 out.go:285] * 
	W1212 20:18:55.011173    1528 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:18:55.016858    1528 out.go:203] 
	W1212 20:18:55.021226    1528 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000961807s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 20:18:55.021226    1528 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 20:18:55.021226    1528 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 20:18:55.024694    1528 out.go:203] 
	
	
	==> Docker <==
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.259960912Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.259968912Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.260013916Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.260053720Z" level=info msg="Initializing buildkit"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.356181012Z" level=info msg="Completed buildkit initialization"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.364821976Z" level=info msg="Daemon has completed initialization"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.364991591Z" level=info msg="API listen on /run/docker.sock"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.365009292Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.365041495Z" level=info msg="API listen on [::]:2376"
	Dec 12 20:06:44 functional-468800 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 12 20:06:44 functional-468800 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 20:06:44 functional-468800 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 12 20:06:44 functional-468800 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 12 20:06:45 functional-468800 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Start docker client with request timeout 0s"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Loaded network plugin cni"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 12 20:06:45 functional-468800 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:20:51.511545   43595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:20:51.512521   43595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:20:51.513557   43595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:20:51.514591   43595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:20:51.515564   43595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000759] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000963] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000962] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001270] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000859] FS:  0000000000000000 GS:  0000000000000000
	[Dec12 20:06] CPU: 0 PID: 65148 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000914] RIP: 0033:0x7f777423db20
	[  +0.000416] Code: Unable to access opcode bytes at RIP 0x7f777423daf6.
	[  +0.000639] RSP: 002b:00007ffce8bd2080 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000797] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000800] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000998] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001310] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001295] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.002739] FS:  0000000000000000 GS:  0000000000000000
	[  +0.817521] CPU: 11 PID: 65286 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000931] RIP: 0033:0x7f6f12da3b20
	[  +0.000392] Code: Unable to access opcode bytes at RIP 0x7f6f12da3af6.
	[  +0.000657] RSP: 002b:00007ffe2eabe1e0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000812] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000817] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000776] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000784] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000994] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001070] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 20:20:51 up  1:22,  0 user,  load average: 0.41, 0.35, 0.43
	Linux functional-468800 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 20:20:47 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:20:48 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 472.
	Dec 12 20:20:48 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:20:48 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:20:48 functional-468800 kubelet[43407]: E1212 20:20:48.694159   43407 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:20:48 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:20:48 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:20:49 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 473.
	Dec 12 20:20:49 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:20:49 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:20:49 functional-468800 kubelet[43433]: E1212 20:20:49.465218   43433 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:20:49 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:20:49 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:20:50 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 474.
	Dec 12 20:20:50 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:20:50 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:20:50 functional-468800 kubelet[43462]: E1212 20:20:50.215966   43462 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:20:50 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:20:50 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:20:50 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 475.
	Dec 12 20:20:50 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:20:50 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:20:50 functional-468800 kubelet[43489]: E1212 20:20:50.950253   43489 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:20:50 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:20:50 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-468800 -n functional-468800
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-468800 -n functional-468800: exit status 2 (588.7064ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-468800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (5.34s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (124.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect


=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-468800 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-468800 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (93.7084ms)

** stderr ** 
	error: failed to create deployment: Post "https://127.0.0.1:55778/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": EOF

** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-468800 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-468800 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-468800 describe po hello-node-connect: exit status 1 (50.3239042s)

** stderr ** 
	E1212 20:20:29.241962   14100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:20:39.320853   14100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:20:49.364114   14100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:20:59.398783   14100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:21:09.441535   14100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

** /stderr **
functional_test.go:1614: "kubectl --context functional-468800 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-468800 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-468800 logs -l app=hello-node-connect: exit status 1 (40.3033561s)

** stderr ** 
	E1212 20:21:19.582247    9316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:21:29.667411    9316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:21:39.705367    9316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:21:49.749676    9316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

** /stderr **
functional_test.go:1620: "kubectl --context functional-468800 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-468800 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-468800 describe svc hello-node-connect: exit status 1 (29.3648897s)

** stderr ** 
	E1212 20:21:59.884450     736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:22:09.970372     736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"

                                                
functional_test.go:1626: "kubectl --context functional-468800 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-468800
helpers_test.go:244: (dbg) docker inspect functional-468800:

-- stdout --
	[
	    {
	        "Id": "0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356",
	        "Created": "2025-12-12T19:49:05.216637357Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42493,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T19:49:05.485304524Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/hosts",
	        "LogPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356-json.log",
	        "Name": "/functional-468800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-468800:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-468800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06-init/diff:/var/lib/docker/overlay2/98a48e148b465d6f3cfef773b7defe35ccd4e6e0eb3c49d8b329f27b5b52fa09/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-468800",
	                "Source": "/var/lib/docker/volumes/functional-468800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-468800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-468800",
	                "name.minikube.sigs.k8s.io": "functional-468800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1c576a1bc459e3ecef05356a56ff79fb60b3540cf362d7af9e4f9ccd4a4dded4",
	            "SandboxKey": "/var/run/docker/netns/1c576a1bc459",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55779"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55780"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55781"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55777"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55778"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-468800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a0db2e63885ea6d1e4cf32e8285a0f1e1fe10bae67ef9ab957c6270bdb8c136b",
	                    "EndpointID": "d5e453160824066e5fee477ceb2ca5411fd18d149268eed593084ba8a0cf13dd",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-468800",
	                        "0d3494c9d93e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-468800 -n functional-468800
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-468800 -n functional-468800: exit status 2 (586.4146ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
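helpers_test.go treats the non-zero `status` exit above as tolerable ("may be ok"). As a minimal sketch of that convention, assuming the usual minikube behavior where exit code 2 indicates a degraded-but-running profile rather than a hard failure (the `check_status` wrapper below is hypothetical and not part of helpers_test.go):

```shell
#!/bin/sh
# Hypothetical wrapper mirroring how the harness classifies status exit codes:
# 0 is healthy, 2 is "degraded (may be ok)", anything else is an error.
check_status() {
  "$@"
  rc=$?
  if [ "$rc" -eq 0 ]; then
    echo "ok"
  elif [ "$rc" -eq 2 ]; then
    echo "degraded (may be ok)"
  else
    echo "error ($rc)"
  fi
}

# Stand-in for the `minikube status` invocation above exiting with status 2.
check_status sh -c 'exit 2'   # prints: degraded (may be ok)
```

Note that in this run the `.Host` field still printed `Running` even though the exit code was 2, which is why the helper logs the condition as possibly benign instead of failing outright.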
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-468800 logs -n 25: (1.314185s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND   │                                                                           ARGS                                                                            │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service    │ functional-468800 service hello-node --url                                                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │                     │
	│ ssh        │ functional-468800 ssh -n functional-468800 sudo cat /tmp/does/not/exist/cp-test.txt                                                                       │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ tunnel     │ functional-468800 tunnel --alsologtostderr                                                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │                     │
	│ addons     │ functional-468800 addons list                                                                                                                             │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ addons     │ functional-468800 addons list -o json                                                                                                                     │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh        │ functional-468800 ssh sudo cat /etc/ssl/certs/13396.pem                                                                                                   │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh        │ functional-468800 ssh sudo cat /usr/share/ca-certificates/13396.pem                                                                                       │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh        │ functional-468800 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh        │ functional-468800 ssh sudo cat /etc/ssl/certs/133962.pem                                                                                                  │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh        │ functional-468800 ssh sudo cat /usr/share/ca-certificates/133962.pem                                                                                      │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh        │ functional-468800 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ docker-env │ functional-468800 docker-env                                                                                                                              │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh        │ functional-468800 ssh sudo cat /etc/test/nested/copy/13396/hosts                                                                                          │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image      │ functional-468800 image load --daemon kicbase/echo-server:functional-468800 --alsologtostderr                                                             │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image      │ functional-468800 image ls                                                                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image      │ functional-468800 image load --daemon kicbase/echo-server:functional-468800 --alsologtostderr                                                             │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image      │ functional-468800 image ls                                                                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image      │ functional-468800 image load --daemon kicbase/echo-server:functional-468800 --alsologtostderr                                                             │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image      │ functional-468800 image ls                                                                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image      │ functional-468800 image save kicbase/echo-server:functional-468800 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image      │ functional-468800 image rm kicbase/echo-server:functional-468800 --alsologtostderr                                                                        │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image      │ functional-468800 image ls                                                                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image      │ functional-468800 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr                                       │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image      │ functional-468800 image ls                                                                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image      │ functional-468800 image save --daemon kicbase/echo-server:functional-468800 --alsologtostderr                                                             │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	└────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:06:38
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:06:38.727985    1528 out.go:360] Setting OutFile to fd 1056 ...
	I1212 20:06:38.773098    1528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:06:38.773098    1528 out.go:374] Setting ErrFile to fd 1212...
	I1212 20:06:38.773098    1528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:06:38.787709    1528 out.go:368] Setting JSON to false
	I1212 20:06:38.790304    1528 start.go:133] hostinfo: {"hostname":"minikube4","uptime":4136,"bootTime":1765565861,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 20:06:38.790304    1528 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 20:06:38.796304    1528 out.go:179] * [functional-468800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 20:06:38.800290    1528 notify.go:221] Checking for updates...
	I1212 20:06:38.800290    1528 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 20:06:38.802303    1528 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:06:38.805306    1528 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 20:06:38.807332    1528 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:06:38.808856    1528 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:06:38.812430    1528 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 20:06:38.812430    1528 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:06:38.929707    1528 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 20:06:38.933677    1528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:06:39.195122    1528 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-12 20:06:39.177384092 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 20:06:39.201119    1528 out.go:179] * Using the docker driver based on existing profile
	I1212 20:06:39.203117    1528 start.go:309] selected driver: docker
	I1212 20:06:39.203117    1528 start.go:927] validating driver "docker" against &{Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:06:39.203117    1528 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:06:39.209122    1528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:06:39.449342    1528 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-12 20:06:39.430307853 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 20:06:39.528922    1528 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:06:39.529468    1528 cni.go:84] Creating CNI manager for ""
	I1212 20:06:39.529468    1528 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 20:06:39.529468    1528 start.go:353] cluster config:
	{Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:06:39.533005    1528 out.go:179] * Starting "functional-468800" primary control-plane node in "functional-468800" cluster
	I1212 20:06:39.535095    1528 cache.go:134] Beginning downloading kic base image for docker with docker
	I1212 20:06:39.537607    1528 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:06:39.540959    1528 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 20:06:39.540959    1528 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:06:39.540959    1528 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1212 20:06:39.540959    1528 cache.go:65] Caching tarball of preloaded images
	I1212 20:06:39.541554    1528 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 20:06:39.541554    1528 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1212 20:06:39.541554    1528 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\config.json ...
	I1212 20:06:39.619509    1528 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:06:39.619509    1528 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:06:39.619509    1528 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:06:39.619509    1528 start.go:360] acquireMachinesLock for functional-468800: {Name:mk7e7177bdfcb5a7969561474f8bb14fa15c1eab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:06:39.619509    1528 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-468800"
	I1212 20:06:39.620041    1528 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:06:39.620041    1528 fix.go:54] fixHost starting: 
	I1212 20:06:39.627157    1528 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
	I1212 20:06:39.683014    1528 fix.go:112] recreateIfNeeded on functional-468800: state=Running err=<nil>
	W1212 20:06:39.683376    1528 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 20:06:39.686124    1528 out.go:252] * Updating the running docker "functional-468800" container ...
	I1212 20:06:39.686124    1528 machine.go:94] provisionDockerMachine start ...
	I1212 20:06:39.689814    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:39.744908    1528 main.go:143] libmachine: Using SSH client type: native
	I1212 20:06:39.745476    1528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 20:06:39.745476    1528 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:06:39.930965    1528 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-468800
	
	I1212 20:06:39.931078    1528 ubuntu.go:182] provisioning hostname "functional-468800"
	I1212 20:06:39.934795    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:39.989752    1528 main.go:143] libmachine: Using SSH client type: native
	I1212 20:06:39.990452    1528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 20:06:39.990452    1528 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-468800 && echo "functional-468800" | sudo tee /etc/hostname
	I1212 20:06:40.176756    1528 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-468800
	
	I1212 20:06:40.180410    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:40.235554    1528 main.go:143] libmachine: Using SSH client type: native
	I1212 20:06:40.236742    1528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 20:06:40.236742    1528 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-468800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-468800/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-468800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:06:40.410965    1528 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:06:40.410965    1528 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1212 20:06:40.410965    1528 ubuntu.go:190] setting up certificates
	I1212 20:06:40.410965    1528 provision.go:84] configureAuth start
	I1212 20:06:40.414835    1528 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-468800
	I1212 20:06:40.468680    1528 provision.go:143] copyHostCerts
	I1212 20:06:40.468680    1528 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1212 20:06:40.468680    1528 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1212 20:06:40.468680    1528 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 20:06:40.469680    1528 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1212 20:06:40.469680    1528 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1212 20:06:40.469680    1528 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 20:06:40.470682    1528 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1212 20:06:40.470682    1528 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1212 20:06:40.470682    1528 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1212 20:06:40.471679    1528 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-468800 san=[127.0.0.1 192.168.49.2 functional-468800 localhost minikube]
	I1212 20:06:40.521679    1528 provision.go:177] copyRemoteCerts
	I1212 20:06:40.526217    1528 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:06:40.529224    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:40.578843    1528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 20:06:40.705122    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 20:06:40.732235    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 20:06:40.758034    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 20:06:40.787536    1528 provision.go:87] duration metric: took 376.5012ms to configureAuth
	I1212 20:06:40.787564    1528 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:06:40.788016    1528 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 20:06:40.791899    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:40.847433    1528 main.go:143] libmachine: Using SSH client type: native
	I1212 20:06:40.847433    1528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 20:06:40.847433    1528 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 20:06:41.031514    1528 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1212 20:06:41.031514    1528 ubuntu.go:71] root file system type: overlay
	I1212 20:06:41.031514    1528 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 20:06:41.035525    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:41.089326    1528 main.go:143] libmachine: Using SSH client type: native
	I1212 20:06:41.090065    1528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 20:06:41.090155    1528 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 20:06:41.283431    1528 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 20:06:41.287473    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:41.343081    1528 main.go:143] libmachine: Using SSH client type: native
	I1212 20:06:41.343562    1528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 20:06:41.343562    1528 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 20:06:41.525616    1528 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:06:41.525616    1528 machine.go:97] duration metric: took 1.8394714s to provisionDockerMachine
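The `diff ... || { mv ...; systemctl restart ...; }` command above is an idempotency pattern: the freshly rendered `docker.service.new` only replaces the live unit (and triggers a daemon-reload/restart) when its contents actually differ. A minimal sketch with scratch files standing in for the real paths under `/lib/systemd/system`:

```shell
# OLD stands in for docker.service, NEW for docker.service.new.
OLD=$(mktemp); NEW=$(mktemp)
printf 'ExecStart=/usr/bin/dockerd --old-flag\n' > "$OLD"
printf 'ExecStart=/usr/bin/dockerd --new-flag\n' > "$NEW"
if diff -u "$OLD" "$NEW" > /dev/null; then
  # Identical rendering: skip the disruptive restart entirely.
  echo "unchanged: no restart needed"
else
  # Differs: swap the file in; the real code then runs
  # `systemctl daemon-reload && systemctl restart docker`.
  mv "$NEW" "$OLD"
  echo "updated: restart required"
fi
cat "$OLD"
```

Skipping the restart when nothing changed is why repeated `minikube start` runs don't bounce the Docker daemon every time.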
	I1212 20:06:41.525616    1528 start.go:293] postStartSetup for "functional-468800" (driver="docker")
	I1212 20:06:41.525616    1528 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:06:41.530519    1528 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:06:41.534083    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:41.586502    1528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 20:06:41.720007    1528 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:06:41.727943    1528 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:06:41.727943    1528 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:06:41.727943    1528 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1212 20:06:41.727943    1528 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1212 20:06:41.728602    1528 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> 133962.pem in /etc/ssl/certs
	I1212 20:06:41.729437    1528 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\13396\hosts -> hosts in /etc/test/nested/copy/13396
	I1212 20:06:41.733519    1528 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/13396
	I1212 20:06:41.745958    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /etc/ssl/certs/133962.pem (1708 bytes)
	I1212 20:06:41.772738    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\13396\hosts --> /etc/test/nested/copy/13396/hosts (40 bytes)
	I1212 20:06:41.802626    1528 start.go:296] duration metric: took 277.0071ms for postStartSetup
	I1212 20:06:41.807164    1528 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:06:41.809505    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:41.864695    1528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 20:06:41.985729    1528 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:06:41.994649    1528 fix.go:56] duration metric: took 2.3745808s for fixHost
	I1212 20:06:41.994649    1528 start.go:83] releasing machines lock for "functional-468800", held for 2.3751133s
	I1212 20:06:41.998707    1528 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-468800
	I1212 20:06:42.059230    1528 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1212 20:06:42.063903    1528 ssh_runner.go:195] Run: cat /version.json
	I1212 20:06:42.063903    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:42.066691    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:42.116356    1528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 20:06:42.117357    1528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	W1212 20:06:42.228585    1528 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1212 20:06:42.232646    1528 ssh_runner.go:195] Run: systemctl --version
	I1212 20:06:42.247485    1528 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:06:42.257236    1528 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:06:42.263875    1528 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:06:42.279473    1528 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:06:42.279473    1528 start.go:496] detecting cgroup driver to use...
	I1212 20:06:42.279473    1528 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 20:06:42.283549    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:06:42.307873    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 20:06:42.326439    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 20:06:42.341366    1528 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 20:06:42.345268    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1212 20:06:42.347179    1528 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1212 20:06:42.347179    1528 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1212 20:06:42.365551    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 20:06:42.385740    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 20:06:42.407021    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 20:06:42.427172    1528 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:06:42.448213    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 20:06:42.467444    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 20:06:42.487296    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 20:06:42.507050    1528 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:06:42.524437    1528 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:06:42.541928    1528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:06:42.701987    1528 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 20:06:42.867618    1528 start.go:496] detecting cgroup driver to use...
	I1212 20:06:42.867618    1528 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 20:06:42.872524    1528 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 20:06:42.900833    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:06:42.922770    1528 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:06:42.982495    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:06:43.005292    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 20:06:43.026719    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:06:43.052829    1528 ssh_runner.go:195] Run: which cri-dockerd
	I1212 20:06:43.064606    1528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 20:06:43.079549    1528 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1212 20:06:43.104999    1528 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 20:06:43.240280    1528 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 20:06:43.379193    1528 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 20:06:43.379358    1528 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 20:06:43.405761    1528 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1212 20:06:43.427392    1528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:06:43.565288    1528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 20:06:44.374705    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:06:44.396001    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1212 20:06:44.418749    1528 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1212 20:06:44.445721    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 20:06:44.466663    1528 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 20:06:44.598807    1528 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 20:06:44.740962    1528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:06:44.883493    1528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 20:06:44.907977    1528 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1212 20:06:44.931006    1528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:06:45.071046    1528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1212 20:06:45.171465    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 20:06:45.190143    1528 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 20:06:45.194535    1528 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 20:06:45.202518    1528 start.go:564] Will wait 60s for crictl version
	I1212 20:06:45.206873    1528 ssh_runner.go:195] Run: which crictl
	I1212 20:06:45.221614    1528 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:06:45.263002    1528 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1212 20:06:45.266767    1528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 20:06:45.308717    1528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 20:06:45.348580    1528 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1212 20:06:45.352493    1528 cli_runner.go:164] Run: docker exec -t functional-468800 dig +short host.docker.internal
	I1212 20:06:45.482840    1528 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1212 20:06:45.487311    1528 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1212 20:06:45.498523    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:45.552748    1528 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1212 20:06:45.554383    1528 kubeadm.go:884] updating cluster {Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:06:45.554933    1528 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 20:06:45.558499    1528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 20:06:45.589105    1528 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-468800
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1212 20:06:45.589105    1528 docker.go:621] Images already preloaded, skipping extraction
	I1212 20:06:45.592742    1528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 20:06:45.625313    1528 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-468800
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1212 20:06:45.625313    1528 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:06:45.625313    1528 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1212 20:06:45.625829    1528 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-468800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:06:45.629232    1528 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 20:06:45.698056    1528 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1212 20:06:45.698078    1528 cni.go:84] Creating CNI manager for ""
	I1212 20:06:45.698133    1528 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 20:06:45.698180    1528 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:06:45.698180    1528 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-468800 NodeName:functional-468800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:06:45.698180    1528 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-468800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
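The config rendered above is one file of stacked YAML documents separated by `---` (the InitConfiguration fragment at the top, then ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration). A minimal sketch of sanity-checking such a file, assuming only that each document carries exactly one top-level `kind:` line:

```shell
# Hypothetical sketch: count the stacked documents in a kubeadm-style
# multi-document config by counting top-level "kind:" lines.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
kind: InitConfiguration
---
kind: ClusterConfiguration
---
kind: KubeletConfiguration
---
kind: KubeProxyConfiguration
EOF
grep -c '^kind:' "$cfg"   # → 4
rm -f "$cfg"
```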
	
	I1212 20:06:45.702170    1528 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 20:06:45.714209    1528 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:06:45.719390    1528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:06:45.731628    1528 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1212 20:06:45.753236    1528 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 20:06:45.772644    1528 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I1212 20:06:45.798125    1528 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 20:06:45.809796    1528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:06:45.998447    1528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:06:46.682417    1528 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800 for IP: 192.168.49.2
	I1212 20:06:46.682417    1528 certs.go:195] generating shared ca certs ...
	I1212 20:06:46.682417    1528 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:06:46.683216    1528 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1212 20:06:46.683331    1528 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1212 20:06:46.683331    1528 certs.go:257] generating profile certs ...
	I1212 20:06:46.683996    1528 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\client.key
	I1212 20:06:46.684112    1528 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.key.a2fee78d
	I1212 20:06:46.684112    1528 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.key
	I1212 20:06:46.685029    1528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem (1338 bytes)
	W1212 20:06:46.685029    1528 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396_empty.pem, impossibly tiny 0 bytes
	I1212 20:06:46.685029    1528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1212 20:06:46.685554    1528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 20:06:46.685624    1528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 20:06:46.685624    1528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1212 20:06:46.685624    1528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem (1708 bytes)
	I1212 20:06:46.686999    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:06:46.715172    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 20:06:46.745329    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:06:46.775248    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:06:46.804288    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 20:06:46.833541    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 20:06:46.858974    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:06:46.883320    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:06:46.912462    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:06:46.937010    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem --> /usr/share/ca-certificates/13396.pem (1338 bytes)
	I1212 20:06:46.963968    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /usr/share/ca-certificates/133962.pem (1708 bytes)
	I1212 20:06:46.987545    1528 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:06:47.014201    1528 ssh_runner.go:195] Run: openssl version
	I1212 20:06:47.028684    1528 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:06:47.047532    1528 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:06:47.066889    1528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:06:47.074545    1528 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:06:47.078818    1528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:06:47.128719    1528 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:06:47.145523    1528 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13396.pem
	I1212 20:06:47.162300    1528 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13396.pem /etc/ssl/certs/13396.pem
	I1212 20:06:47.179220    1528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13396.pem
	I1212 20:06:47.188551    1528 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 20:06:47.193732    1528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13396.pem
	I1212 20:06:47.241331    1528 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:06:47.258219    1528 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/133962.pem
	I1212 20:06:47.276085    1528 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/133962.pem /etc/ssl/certs/133962.pem
	I1212 20:06:47.293199    1528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133962.pem
	I1212 20:06:47.300084    1528 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 20:06:47.304026    1528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133962.pem
	I1212 20:06:47.352991    1528 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:06:47.371677    1528 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:06:47.384558    1528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:06:47.433291    1528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:06:47.480566    1528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:06:47.530653    1528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:06:47.582068    1528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:06:47.630287    1528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 20:06:47.673527    1528 kubeadm.go:401] StartCluster: {Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:06:47.678147    1528 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 20:06:47.710789    1528 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:06:47.723256    1528 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 20:06:47.723256    1528 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 20:06:47.727283    1528 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 20:06:47.740989    1528 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:06:47.744500    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:47.805147    1528 kubeconfig.go:125] found "functional-468800" server: "https://127.0.0.1:55778"
	I1212 20:06:47.813022    1528 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 20:06:47.830078    1528 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-12 19:49:17.606323144 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-12 20:06:45.789464240 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
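The drift detection above rides on `diff -u`'s exit status: 0 when the deployed kubeadm.yaml matches the freshly rendered one, 1 when they differ (here, the changed `enable-admission-plugins` value). A minimal sketch of that decision, with made-up file contents:

```shell
# Sketch of the config-drift decision: diff exits 0 on identical files,
# 1 on differing files, so the exit status selects restart vs. reconfigure.
old=$(mktemp); new=$(mktemp)
printf 'enable-admission-plugins: defaults\n'               > "$old"
printf 'enable-admission-plugins: NamespaceAutoProvision\n' > "$new"
if diff -u "$old" "$new" > /dev/null; then
  echo "configs match, plain restart"
else
  echo "config drift detected, reconfiguring"   # taken here
fi
rm -f "$old" "$new"
```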
	I1212 20:06:47.830078    1528 kubeadm.go:1161] stopping kube-system containers ...
	I1212 20:06:47.833739    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 20:06:47.872403    1528 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 20:06:47.898698    1528 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:06:47.911626    1528 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 12 19:53 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 12 19:53 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 12 19:53 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 12 19:53 /etc/kubernetes/scheduler.conf
	
	I1212 20:06:47.916032    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:06:47.934293    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:06:47.947871    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:06:47.952020    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:06:47.971701    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:06:47.986795    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:06:47.991166    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:06:48.008021    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:06:48.023761    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:06:48.029138    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:06:48.047659    1528 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:06:48.063995    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:06:48.141323    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:06:48.685789    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:06:48.933405    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:06:49.007626    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:06:49.088118    1528 api_server.go:52] waiting for apiserver process to appear ...
	I1212 20:06:49.091668    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:49.594772    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... identical pgrep polls repeated every ~500 ms from 20:06:50 through 20:07:48 ...]
	I1212 20:07:48.594667    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
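The long run of `sudo pgrep -xnf kube-apiserver.*minikube.*` calls above is a ~500 ms poll waiting for the apiserver process to appear; it never does, so after roughly a minute minikube gives up and falls back to gathering logs. A self-contained sketch of that wait loop, with a background `sleep 5` standing in for kube-apiserver:

```shell
# Sketch of the poll-until-present loop from the log ("sleep 5 &" stands
# in for kube-apiserver; the real loop uses a much longer deadline).
sleep 5 &
deadline=$((SECONDS + 10))
until pgrep -xf 'sleep 5' > /dev/null; do
  if [ "$SECONDS" -ge "$deadline" ]; then echo "timed out"; break; fi
  sleep 0.5   # poll interval, matching the ~500 ms cadence above
done
pgrep -xf 'sleep 5' > /dev/null && echo "apiserver process appeared"
wait
```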
	I1212 20:07:49.093256    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:07:49.126325    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.126325    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:07:49.130353    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:07:49.158022    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.158022    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:07:49.162811    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:07:49.190525    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.190525    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:07:49.194310    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:07:49.220030    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.220030    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:07:49.223677    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:07:49.249986    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.249986    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:07:49.253970    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:07:49.282441    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.282441    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:07:49.286057    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:07:49.315225    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.315248    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:07:49.315306    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:07:49.315306    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:07:49.374436    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:07:49.374436    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:07:49.404204    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:07:49.404204    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:07:49.493575    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:07:49.481243   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.484599   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.485624   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.487104   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.488000   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:07:49.481243   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.484599   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.485624   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.487104   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.488000   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:07:49.493575    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:07:49.493575    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:07:49.537752    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:07:49.537752    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:07:52.109985    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:52.133820    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:07:52.164388    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.164388    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:07:52.168109    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:07:52.195605    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.195605    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:07:52.199164    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:07:52.229188    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.229188    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:07:52.232745    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:07:52.256990    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.256990    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:07:52.261539    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:07:52.290862    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.290862    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:07:52.294555    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:07:52.324957    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.324957    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:07:52.330284    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:07:52.359197    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.359197    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:07:52.359197    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:07:52.359197    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:07:52.386524    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:07:52.386524    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:07:52.470690    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:07:52.461396   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.462458   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.463681   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.464521   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.466051   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:07:52.461396   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.462458   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.463681   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.464521   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.466051   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:07:52.470690    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:07:52.470690    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:07:52.511513    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:07:52.511513    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:07:52.560676    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:07:52.560676    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:07:55.127058    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:55.150663    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:07:55.181456    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.181456    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:07:55.184641    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:07:55.217269    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.217269    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:07:55.220911    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:07:55.250346    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.250346    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:07:55.254082    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:07:55.285676    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.285706    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:07:55.288968    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:07:55.315854    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.315854    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:07:55.319386    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:07:55.348937    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.348937    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:07:55.352894    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:07:55.380789    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.380853    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:07:55.380853    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:07:55.380883    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:07:55.463944    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:07:55.453644   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.455074   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.457094   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.458003   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.461398   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:07:55.453644   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.455074   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.457094   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.458003   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.461398   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:07:55.463944    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:07:55.463944    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:07:55.507780    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:07:55.507780    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:07:55.561906    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:07:55.561906    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:07:55.623372    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:07:55.623372    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:07:58.160009    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:58.184039    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:07:58.215109    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.215109    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:07:58.218681    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:07:58.247778    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.247778    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:07:58.251301    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:07:58.278710    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.278710    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:07:58.282296    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:07:58.308953    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.308953    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:07:58.312174    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:07:58.339973    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.340049    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:07:58.343731    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:07:58.374943    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.374943    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:07:58.378660    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:07:58.405372    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.405372    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:07:58.405372    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:07:58.405372    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:07:58.453718    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:07:58.453718    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:07:58.514502    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:07:58.514502    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:07:58.544394    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:07:58.544394    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:07:58.623232    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:07:58.613437   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.615268   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.617776   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.618728   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.619935   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:07:58.613437   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.615268   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.617776   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.618728   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.619935   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:07:58.623232    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:07:58.623232    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:01.169113    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:01.192583    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:01.222434    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.222434    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:01.225873    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:01.253020    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.253020    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:01.257395    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:01.286407    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.286407    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:01.290442    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:01.317408    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.317408    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:01.321138    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:01.348820    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.348820    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:01.352926    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:01.383541    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.383541    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:01.387373    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:01.415400    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.415431    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:01.415431    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:01.415466    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:01.481183    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:01.481183    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:01.512132    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:01.512132    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:01.598560    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:01.586562   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.587448   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.590300   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.591224   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.593552   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:01.586562   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.587448   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.590300   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.591224   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.593552   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:01.598601    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:01.598601    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:01.641848    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:01.641848    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:04.202764    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:04.225393    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:04.257048    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.257048    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:04.261463    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:04.289329    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.289329    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:04.295911    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:04.324136    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.324205    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:04.329272    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:04.355941    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.355941    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:04.359744    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:04.389386    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.389461    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:04.393063    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:04.421465    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.421465    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:04.425377    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:04.454159    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.454159    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:04.454185    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:04.454221    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:04.499238    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:04.499238    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:04.546668    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:04.546668    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:04.614181    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:04.614181    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:04.646155    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:04.646155    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:04.746527    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:04.735346   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.736502   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.738096   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.740089   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.741574   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:04.735346   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.736502   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.738096   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.740089   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.741574   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:07.252038    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:07.276838    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:07.307770    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.307770    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:07.311473    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:07.338086    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.338086    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:07.343809    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:07.373687    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.373687    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:07.377399    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:07.406083    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.406083    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:07.409835    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:07.437651    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.437651    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:07.441428    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:07.468369    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.468369    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:07.472164    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:07.503047    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.503047    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:07.503047    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:07.503811    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:07.531856    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:07.531856    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:07.618451    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:07.605790   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.608413   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.611125   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.611805   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.614027   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:07.605790   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.608413   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.611125   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.611805   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.614027   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:07.618451    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:07.618451    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:07.661072    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:07.661072    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:07.708185    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:07.708185    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:10.277741    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:10.301882    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:10.334646    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.334646    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:10.338176    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:10.369543    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.369543    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:10.372853    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:10.405159    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.405159    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:10.408623    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:10.436491    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.436491    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:10.440653    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:10.471674    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.471674    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:10.475616    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:10.503923    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.503923    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:10.507960    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:10.532755    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.532755    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:10.532755    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:10.532755    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:10.596502    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:10.596502    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:10.627352    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:10.627352    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:10.716582    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:10.705824   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.707137   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.709479   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.710611   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.712209   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:10.705824   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.707137   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.709479   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.710611   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.712209   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:10.716582    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:10.716582    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:10.758177    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:10.758177    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:13.312261    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:13.336629    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:13.366321    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.366321    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:13.370440    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:13.398643    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.398643    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:13.402381    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:13.432456    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.432481    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:13.436213    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:13.464635    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.464711    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:13.468308    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:13.495284    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.495284    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:13.499271    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:13.528325    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.528325    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:13.531787    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:13.562227    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.562227    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:13.562227    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:13.562227    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:13.663593    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:13.651715   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.652789   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.654090   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.658182   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.659318   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:13.651715   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.652789   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.654090   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.658182   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.659318   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:13.663593    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:13.663593    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:13.704702    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:13.704702    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:13.753473    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:13.753473    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:13.816534    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:13.816534    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:16.353541    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:16.376390    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:16.407214    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.407214    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:16.410992    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:16.441225    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.441225    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:16.444710    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:16.474803    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.474803    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:16.478736    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:16.507490    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.507490    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:16.510890    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:16.542100    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.542196    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:16.546032    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:16.575799    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.575799    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:16.579959    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:16.607409    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.607409    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:16.607409    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:16.607409    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:16.635159    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:16.635159    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:16.716319    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:16.704484   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.705339   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.707697   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.710003   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.712279   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:16.704484   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.705339   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.707697   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.710003   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.712279   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:16.716319    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:16.716319    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:16.759176    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:16.759176    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:16.808150    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:16.808180    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:19.374586    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:19.397466    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:19.428699    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.428699    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:19.432104    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:19.459357    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.459357    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:19.463506    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:19.492817    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.492862    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:19.496262    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:19.524604    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.524633    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:19.528245    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:19.554030    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.554030    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:19.557659    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:19.585449    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.585449    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:19.589270    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:19.617715    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.617715    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:19.617715    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:19.617715    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:19.665679    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:19.665679    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:19.731378    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:19.731378    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:19.760660    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:19.760660    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:19.846488    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:19.835706   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.837312   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.839548   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.840426   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.842652   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:19.835706   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.837312   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.839548   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.840426   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.842652   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:19.846488    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:19.846534    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:22.396054    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:22.420446    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:22.451208    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.451246    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:22.455255    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:22.482900    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.482900    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:22.486411    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:22.515383    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.515383    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:22.518824    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:22.550034    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.550034    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:22.553623    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:22.581020    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.581020    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:22.585628    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:22.612869    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.612869    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:22.616928    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:22.644472    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.644472    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:22.644472    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:22.644472    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:22.708075    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:22.708075    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:22.738243    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:22.738270    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:22.821664    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:22.811477   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:22.812684   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:22.813664   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:22.815983   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:22.818141   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 20:08:22.821664    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:22.821664    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:22.864165    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:22.864165    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:25.420933    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:25.445913    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:25.482750    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.482780    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:25.486866    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:25.513327    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.513327    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:25.516888    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:25.544296    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.544296    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:25.547411    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:25.577831    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.577831    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:25.581764    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:25.611577    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.611577    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:25.614994    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:25.643683    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.643683    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:25.647543    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:25.673764    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.673764    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:25.673764    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:25.673764    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:25.756845    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:25.747568   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:25.748735   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:25.749881   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:25.750867   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:25.753666   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 20:08:25.756845    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:25.756845    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:25.796355    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:25.796355    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:25.848330    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:25.848330    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:25.908271    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:25.908271    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:28.444198    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:28.466730    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:28.495218    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.496317    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:28.499838    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:28.526946    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.526946    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:28.531098    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:28.558957    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.558957    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:28.563084    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:28.591401    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.591401    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:28.594622    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:28.621536    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.621536    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:28.625599    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:28.652819    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.652819    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:28.655938    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:28.684007    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.684007    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:28.684049    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:28.684049    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:28.766993    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:28.757413   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:28.758465   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:28.762706   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:28.763681   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:28.764948   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 20:08:28.766993    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:28.766993    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:28.808427    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:28.808427    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:28.854005    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:28.854005    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:28.915072    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:28.915072    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:31.448340    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:31.482817    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:31.516888    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.516948    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:31.520762    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:31.548829    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.548829    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:31.552634    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:31.580202    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.580202    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:31.583832    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:31.612644    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.612644    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:31.616408    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:31.641662    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.641662    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:31.645105    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:31.674858    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.674858    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:31.678481    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:31.708742    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.708742    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:31.708742    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:31.708742    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:31.737537    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:31.737537    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:31.815915    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:31.804811   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.805789   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.806945   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.808415   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.809568   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 20:08:31.815915    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:31.815915    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:31.855387    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:31.855387    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:31.902882    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:31.902882    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:34.468874    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:34.492525    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:34.524158    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.524158    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:34.528390    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:34.555356    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.555356    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:34.558734    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:34.589102    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.589171    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:34.592795    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:34.621829    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.621829    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:34.625204    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:34.653376    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.653376    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:34.657009    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:34.683738    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.683738    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:34.686742    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:34.714674    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.714674    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:34.714674    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:34.714674    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:34.779026    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:34.779026    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:34.808978    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:34.808978    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:34.892063    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:34.879879   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.880859   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.883101   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.883949   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.886570   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 20:08:34.892063    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:34.892063    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:34.931531    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:34.931531    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:37.485139    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:37.507669    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:37.539156    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.539156    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:37.543011    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:37.573040    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.573040    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:37.576524    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:37.606845    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.606845    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:37.610640    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:37.637362    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.637362    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:37.640345    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:37.667170    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.667203    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:37.670535    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:37.699517    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.699517    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:37.703317    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:37.728898    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.728898    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:37.728898    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:37.728898    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:37.794369    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:37.794369    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:37.824287    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:37.824287    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:37.909344    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:37.898187   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.899265   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.900556   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.901945   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.904063   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 20:08:37.909344    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:37.909344    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:37.954162    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:37.954162    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:40.506487    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:40.531085    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:40.562228    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.562228    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:40.566239    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:40.592782    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.592782    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:40.597032    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:40.623771    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.623771    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:40.627181    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:40.653272    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.653272    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:40.657007    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:40.684331    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.684331    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:40.687951    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:40.717873    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.718396    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:40.722742    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:40.750968    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.750968    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:40.750968    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:40.750968    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:40.780652    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:40.780652    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:40.862566    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:40.851236   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.852305   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.854181   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.856807   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.857931   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:40.851236   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.852305   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.854181   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.856807   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.857931   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:40.862566    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:40.862566    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:40.901731    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:40.901731    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:40.950141    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:40.950141    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:43.517065    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:43.542117    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:43.570769    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.570769    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:43.574614    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:43.606209    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.606209    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:43.610144    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:43.636742    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.636742    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:43.640713    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:43.671147    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.671166    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:43.675284    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:43.702707    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.702707    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:43.709331    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:43.739560    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.739560    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:43.743495    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:43.773460    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.773460    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:43.773460    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:43.773460    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:43.839426    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:43.839426    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:43.869067    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:43.869067    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:43.956418    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:43.945790   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.947881   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.949529   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.951167   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.952658   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:43.945790   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.947881   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.949529   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.951167   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.952658   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:43.956418    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:43.956418    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:43.999225    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:43.999225    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:46.559969    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:46.583306    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:46.616304    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.616304    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:46.620185    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:46.649980    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.649980    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:46.653901    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:46.679706    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.679706    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:46.683349    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:46.709377    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.709377    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:46.713435    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:46.743714    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.743714    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:46.747353    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:46.774831    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.774831    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:46.778444    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:46.803849    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.803849    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:46.803849    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:46.803849    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:46.846976    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:46.846976    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:46.898873    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:46.898873    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:46.960800    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:46.960800    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:46.992131    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:46.992131    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:47.078211    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:47.065571   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.069068   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.070187   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.071256   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.072132   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:47.065571   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.069068   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.070187   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.071256   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.072132   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:49.584391    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:49.609888    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:49.644530    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.644530    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:49.648078    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:49.676237    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.676237    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:49.680633    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:49.711496    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.711496    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:49.714503    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:49.741598    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.741598    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:49.746023    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:49.774073    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.774073    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:49.780499    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:49.807422    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.807422    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:49.811492    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:49.837105    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.837105    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:49.837105    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:49.837105    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:49.919888    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:49.910085   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.911433   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.912870   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.913900   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.914973   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:49.910085   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.911433   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.912870   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.913900   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.914973   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:49.919888    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:49.919888    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:49.961375    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:49.961375    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:50.029040    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:50.029040    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:50.091715    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:50.091715    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:52.626760    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:52.650138    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:52.682125    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.682125    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:52.685499    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:52.716677    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.716677    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:52.720251    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:52.750215    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.750215    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:52.753203    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:52.783410    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.783410    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:52.786745    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:52.816028    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.816028    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:52.819028    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:52.847808    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.847808    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:52.851676    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:52.880388    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.880388    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:52.880388    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:52.880388    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:52.927060    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:52.927060    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:52.980540    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:52.980540    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:53.040013    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:53.040013    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:53.068682    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:53.068682    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:53.153542    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:53.142301   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.143114   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.146316   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.148645   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.149541   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:53.142301   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.143114   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.146316   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.148645   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.149541   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:55.659454    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:55.682885    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:55.711696    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.711696    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:55.718399    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:55.746229    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.746229    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:55.750441    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:55.780178    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.780210    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:55.784012    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:55.811985    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.811985    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:55.816792    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:55.847996    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.847996    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:55.851745    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:55.883521    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.883521    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:55.886915    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:55.914853    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.914853    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:55.914853    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:55.914853    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:55.960920    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:55.960920    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:56.026011    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:56.026011    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:56.053113    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:56.053113    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:56.136578    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:56.126520   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.127364   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.130322   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.131490   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.132838   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:56.126520   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.127364   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.130322   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.131490   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.132838   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:56.136578    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:56.136578    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:58.683199    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:58.705404    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:58.735584    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.735584    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:58.739795    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:58.770569    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.770569    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:58.774526    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:58.804440    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.804440    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:58.808498    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:58.836009    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.836009    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:58.840208    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:58.869192    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.869192    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:58.872945    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:58.902237    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.902237    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:58.905993    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:58.933450    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.933617    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:58.933617    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:58.933617    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:58.976315    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:58.976391    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:59.038199    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:59.038199    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:59.068976    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:59.068976    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:59.160516    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:59.150031   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.151396   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.152924   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.154293   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.156675   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:59.150031   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.151396   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.152924   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.154293   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.156675   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:59.160516    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:59.160516    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:01.709859    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:01.733860    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:01.762957    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.762957    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:01.766889    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:01.793351    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.793351    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:01.797156    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:01.823801    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.823801    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:01.827545    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:01.858811    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.858811    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:01.862667    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:01.888526    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.888601    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:01.892330    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:01.921800    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.921834    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:01.925710    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:01.954630    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.954630    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:01.954630    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:01.954630    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:02.019929    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:02.019929    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:02.050304    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:02.050304    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:02.137016    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:02.125446   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.126668   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.127557   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.130278   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.131874   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:02.125446   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.126668   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.127557   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.130278   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.131874   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:02.137016    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:02.137016    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:02.181380    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:02.181380    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:04.738393    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:04.761261    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:04.788560    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.788594    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:04.792550    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:04.822339    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.822339    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:04.826135    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:04.854461    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.854531    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:04.858147    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:04.886243    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.886243    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:04.890144    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:04.918123    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.918123    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:04.922152    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:04.949493    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.949557    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:04.953111    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:04.980390    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.980390    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:04.980390    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:04.980390    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:05.043888    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:05.043888    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:05.075474    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:05.075474    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:05.156773    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:05.145508   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.147081   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.148087   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.150297   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.151035   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:05.145508   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.147081   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.148087   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.150297   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.151035   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:05.156773    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:05.156773    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:05.198847    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:05.198847    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:07.752600    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:07.774442    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:07.801273    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.801315    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:07.804806    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:07.833315    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.833315    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:07.837119    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:07.866393    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.866417    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:07.869980    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:07.898480    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.898480    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:07.902426    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:07.929231    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.929231    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:07.932443    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:07.962786    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.962786    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:07.966343    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:07.993681    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.993681    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:07.993681    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:07.993681    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:08.075996    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:08.065379   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.066297   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.068979   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.070479   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.071802   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:08.065379   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.066297   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.068979   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.070479   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.071802   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:08.075996    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:08.075996    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:08.115751    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:08.115751    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:08.167959    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:08.167959    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:08.229990    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:08.229990    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:10.765802    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:10.787970    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:10.817520    1528 logs.go:282] 0 containers: []
	W1212 20:09:10.817520    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:10.821188    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:10.850905    1528 logs.go:282] 0 containers: []
	W1212 20:09:10.850905    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:10.854741    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:10.882098    1528 logs.go:282] 0 containers: []
	W1212 20:09:10.882098    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:10.885759    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:10.915908    1528 logs.go:282] 0 containers: []
	W1212 20:09:10.915931    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:10.919484    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:10.947704    1528 logs.go:282] 0 containers: []
	W1212 20:09:10.947704    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:10.951840    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:10.979998    1528 logs.go:282] 0 containers: []
	W1212 20:09:10.979998    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:10.983440    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:11.012620    1528 logs.go:282] 0 containers: []
	W1212 20:09:11.012620    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:11.012620    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:11.012620    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:11.075910    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:11.075910    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:11.105013    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:11.105013    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:11.184242    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:11.174258   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.175183   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.177449   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.178930   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.180215   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:11.174258   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.175183   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.177449   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.178930   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.180215   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:11.184242    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:11.184242    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:11.228072    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:11.228072    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:13.782352    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:13.806071    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:13.835380    1528 logs.go:282] 0 containers: []
	W1212 20:09:13.835380    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:13.839913    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:13.866644    1528 logs.go:282] 0 containers: []
	W1212 20:09:13.866644    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:13.870648    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:13.900617    1528 logs.go:282] 0 containers: []
	W1212 20:09:13.900687    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:13.904431    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:13.928026    1528 logs.go:282] 0 containers: []
	W1212 20:09:13.928026    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:13.931830    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:13.961813    1528 logs.go:282] 0 containers: []
	W1212 20:09:13.961813    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:13.965790    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:13.993658    1528 logs.go:282] 0 containers: []
	W1212 20:09:13.993658    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:13.997303    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:14.025708    1528 logs.go:282] 0 containers: []
	W1212 20:09:14.025708    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:14.025708    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:14.025708    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:14.106478    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:14.097472   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.098766   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.100198   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.101259   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.102326   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:14.097472   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.098766   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.100198   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.101259   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.102326   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:14.106478    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:14.106478    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:14.148128    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:14.148128    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:14.203808    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:14.203885    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:14.267083    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:14.267083    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:16.803844    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:16.828076    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:16.857370    1528 logs.go:282] 0 containers: []
	W1212 20:09:16.857370    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:16.861602    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:16.888928    1528 logs.go:282] 0 containers: []
	W1212 20:09:16.888928    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:16.892594    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:16.918950    1528 logs.go:282] 0 containers: []
	W1212 20:09:16.918950    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:16.922184    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:16.949697    1528 logs.go:282] 0 containers: []
	W1212 20:09:16.949697    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:16.953615    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:16.980582    1528 logs.go:282] 0 containers: []
	W1212 20:09:16.980582    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:16.984239    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:17.011537    1528 logs.go:282] 0 containers: []
	W1212 20:09:17.011537    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:17.015236    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:17.044025    1528 logs.go:282] 0 containers: []
	W1212 20:09:17.044025    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:17.044059    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:17.044059    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:17.108593    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:17.108593    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:17.140984    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:17.140984    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:17.223600    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:17.212237   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.213454   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.216019   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.218047   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.219305   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:17.212237   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.213454   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.216019   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.218047   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.219305   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:17.223647    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:17.223647    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:17.265808    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:17.265808    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:19.827665    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:19.848754    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:19.880440    1528 logs.go:282] 0 containers: []
	W1212 20:09:19.880440    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:19.884631    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:19.911688    1528 logs.go:282] 0 containers: []
	W1212 20:09:19.911688    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:19.915503    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:19.942894    1528 logs.go:282] 0 containers: []
	W1212 20:09:19.942894    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:19.946623    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:19.974622    1528 logs.go:282] 0 containers: []
	W1212 20:09:19.974622    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:19.978983    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:20.005201    1528 logs.go:282] 0 containers: []
	W1212 20:09:20.005201    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:20.009244    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:20.040298    1528 logs.go:282] 0 containers: []
	W1212 20:09:20.040298    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:20.043935    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:20.073267    1528 logs.go:282] 0 containers: []
	W1212 20:09:20.073267    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:20.073267    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:20.073267    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:20.139351    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:20.139351    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:20.170692    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:20.170692    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:20.255758    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:20.244828   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.245691   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.248701   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.249629   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.251916   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:20.244828   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.245691   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.248701   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.249629   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.251916   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:20.255758    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:20.255758    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:20.296082    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:20.296082    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:22.852656    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:22.877113    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:22.907531    1528 logs.go:282] 0 containers: []
	W1212 20:09:22.907601    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:22.911006    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:22.938103    1528 logs.go:282] 0 containers: []
	W1212 20:09:22.938103    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:22.941741    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:22.969757    1528 logs.go:282] 0 containers: []
	W1212 20:09:22.969757    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:22.973641    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:23.003718    1528 logs.go:282] 0 containers: []
	W1212 20:09:23.003718    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:23.007427    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:23.034105    1528 logs.go:282] 0 containers: []
	W1212 20:09:23.034105    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:23.038551    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:23.068440    1528 logs.go:282] 0 containers: []
	W1212 20:09:23.068440    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:23.072250    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:23.099797    1528 logs.go:282] 0 containers: []
	W1212 20:09:23.099797    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:23.099797    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:23.099797    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:23.127441    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:23.127441    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:23.213420    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:23.200013   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.205001   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.207092   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.207990   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.210181   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:23.200013   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.205001   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.207092   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.207990   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.210181   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:23.213420    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:23.213420    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:23.258155    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:23.258155    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:23.304413    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:23.304413    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:25.871188    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:25.894216    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:25.924994    1528 logs.go:282] 0 containers: []
	W1212 20:09:25.924994    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:25.928893    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:25.956143    1528 logs.go:282] 0 containers: []
	W1212 20:09:25.956143    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:25.961174    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:25.988898    1528 logs.go:282] 0 containers: []
	W1212 20:09:25.988898    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:25.993364    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:26.021169    1528 logs.go:282] 0 containers: []
	W1212 20:09:26.021233    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:26.024829    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:26.051922    1528 logs.go:282] 0 containers: []
	W1212 20:09:26.051922    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:26.055062    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:26.082542    1528 logs.go:282] 0 containers: []
	W1212 20:09:26.082542    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:26.086788    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:26.117355    1528 logs.go:282] 0 containers: []
	W1212 20:09:26.117355    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:26.117355    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:26.117355    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:26.180352    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:26.180352    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:26.211105    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:26.211105    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:26.296971    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:26.286964   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.287943   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.291491   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.292547   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.294617   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:26.286964   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.287943   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.291491   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.292547   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.294617   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:26.296971    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:26.296971    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:26.338711    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:26.338711    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:28.896860    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:28.920643    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:28.950389    1528 logs.go:282] 0 containers: []
	W1212 20:09:28.950389    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:28.955391    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:28.982117    1528 logs.go:282] 0 containers: []
	W1212 20:09:28.982117    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:28.986142    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:29.015662    1528 logs.go:282] 0 containers: []
	W1212 20:09:29.015662    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:29.019455    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:29.049660    1528 logs.go:282] 0 containers: []
	W1212 20:09:29.049660    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:29.053631    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:29.081889    1528 logs.go:282] 0 containers: []
	W1212 20:09:29.081889    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:29.086411    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:29.114138    1528 logs.go:282] 0 containers: []
	W1212 20:09:29.114138    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:29.119659    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:29.150078    1528 logs.go:282] 0 containers: []
	W1212 20:09:29.150078    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:29.150078    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:29.150078    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:29.214085    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:29.214085    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:29.248111    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:29.248111    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:29.331531    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:29.323301   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.324246   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.326232   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.327289   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.328469   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:29.323301   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.324246   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.326232   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.327289   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.328469   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:29.331531    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:29.331573    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:29.371475    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:29.371475    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:31.925581    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:31.948416    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:31.979393    1528 logs.go:282] 0 containers: []
	W1212 20:09:31.979436    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:31.982941    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:32.012671    1528 logs.go:282] 0 containers: []
	W1212 20:09:32.012745    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:32.016490    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:32.044571    1528 logs.go:282] 0 containers: []
	W1212 20:09:32.044571    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:32.049959    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:32.077737    1528 logs.go:282] 0 containers: []
	W1212 20:09:32.077737    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:32.082023    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:32.112680    1528 logs.go:282] 0 containers: []
	W1212 20:09:32.112680    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:32.116732    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:32.144079    1528 logs.go:282] 0 containers: []
	W1212 20:09:32.144079    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:32.147365    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:32.175674    1528 logs.go:282] 0 containers: []
	W1212 20:09:32.175674    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:32.175674    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:32.175674    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:32.238433    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:32.238433    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:32.268680    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:32.268680    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:32.350924    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:32.339559   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.340729   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.342082   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.343375   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.344861   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:32.339559   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.340729   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.342082   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.343375   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.344861   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:32.351446    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:32.351446    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:32.393409    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:32.393409    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:34.949675    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:34.974371    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:35.003673    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.003673    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:35.007894    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:35.036794    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.036794    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:35.040718    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:35.068827    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.068827    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:35.073552    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:35.101505    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.101505    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:35.105374    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:35.132637    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.132637    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:35.135977    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:35.164108    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.164108    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:35.168327    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:35.196237    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.196237    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:35.196237    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:35.196237    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:35.225096    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:35.225096    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:35.310720    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:35.296566   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.297390   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.300154   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.302418   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.302966   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:35.296566   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.297390   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.300154   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.302418   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.302966   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:35.310720    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:35.310720    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:35.352640    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:35.352640    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:35.405163    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:35.405684    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:37.970126    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:37.993740    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:38.021567    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.021567    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:38.025733    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:38.054259    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.054259    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:38.058230    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:38.091609    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.091609    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:38.094726    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:38.121402    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.121402    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:38.124780    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:38.156230    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.156230    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:38.159968    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:38.187111    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.187111    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:38.191000    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:38.219114    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.219114    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:38.219114    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:38.219163    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:38.267592    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:38.267642    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:38.332291    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:38.332291    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:38.362654    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:38.362654    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:38.450249    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:38.438367   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.439369   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.441361   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.442677   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.443575   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:38.438367   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.439369   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.441361   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.442677   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.443575   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:38.450249    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:38.450249    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:41.000122    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:41.025061    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:41.056453    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.056453    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:41.060356    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:41.090046    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.090046    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:41.096769    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:41.124375    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.124375    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:41.128276    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:41.155835    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.155835    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:41.159800    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:41.188748    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.188748    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:41.193110    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:41.220152    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.220152    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:41.224010    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:41.252532    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.252532    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:41.252532    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:41.252532    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:41.316983    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:41.316983    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:41.347558    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:41.347558    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:41.428225    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:41.416671   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.417861   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.419148   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.420206   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.420911   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:41.416671   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.417861   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.419148   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.420206   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.420911   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:41.428225    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:41.428225    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:41.470919    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:41.470919    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:44.030446    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:44.055047    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:44.084459    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.084459    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:44.088206    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:44.117052    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.117052    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:44.120537    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:44.147556    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.147556    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:44.152098    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:44.180075    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.180075    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:44.183790    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:44.210767    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.210767    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:44.214367    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:44.240217    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.240217    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:44.244696    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:44.273318    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.273318    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:44.273318    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:44.273371    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:44.339517    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:44.339517    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:44.369771    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:44.369771    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:44.450064    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:44.438924   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.439839   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.441693   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.442539   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.445500   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:44.438924   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.439839   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.441693   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.442539   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.445500   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:44.450064    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:44.450064    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:44.493504    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:44.493504    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:47.062950    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:47.087994    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:47.118381    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.118409    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:47.121556    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:47.150429    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.150429    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:47.154790    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:47.182604    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.182604    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:47.186262    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:47.213354    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.213354    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:47.217174    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:47.246442    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.246442    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:47.251292    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:47.280336    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.280336    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:47.283865    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:47.311245    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.311323    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:47.311323    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:47.311323    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:47.374063    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:47.374063    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:47.404257    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:47.404257    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:47.493784    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:47.483027   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.484386   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.488247   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.489212   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.490430   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:47.483027   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.484386   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.488247   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.489212   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.490430   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:47.493784    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:47.493784    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:47.546267    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:47.546267    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:50.104321    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:50.126581    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:50.155564    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.155564    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:50.160428    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:50.189268    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.189268    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:50.192916    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:50.218955    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.218955    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:50.222686    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:50.249342    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.249342    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:50.253397    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:50.283028    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.283028    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:50.286951    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:50.325979    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.325979    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:50.329622    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:50.358362    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.358362    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:50.358362    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:50.358362    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:50.422488    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:50.422488    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:50.452652    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:50.452652    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:50.550551    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:50.541153   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.542372   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.543369   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.544652   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.545772   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:50.541153   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.542372   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.543369   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.544652   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.545772   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:50.550602    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:50.550602    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:50.590552    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:50.590552    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:53.158722    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:53.182259    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:53.211903    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.211903    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:53.215402    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:53.243958    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.243958    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:53.247562    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:53.275751    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.275751    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:53.279763    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:53.306836    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.306836    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:53.310872    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:53.337813    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.337813    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:53.341633    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:53.371291    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.371291    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:53.374974    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:53.401726    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.401726    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:53.401726    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:53.401726    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:53.484480    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:53.475528   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.476720   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.477955   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.479240   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.480197   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:53.475528   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.476720   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.477955   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.479240   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.480197   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:53.484480    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:53.484480    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:53.548050    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:53.548050    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:53.599287    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:53.599439    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:53.660624    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:53.660624    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:56.196823    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:56.221135    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:56.250407    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.250407    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:56.254016    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:56.285901    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.285901    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:56.290067    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:56.318341    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.318341    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:56.321789    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:56.352739    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.352739    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:56.356470    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:56.384106    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.384106    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:56.388211    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:56.415890    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.415890    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:56.420087    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:56.447932    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.447932    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:56.447932    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:56.447932    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:56.477708    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:56.477708    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:56.588387    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:56.579332   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.580404   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.581153   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.583503   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.584370   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:56.579332   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.580404   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.581153   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.583503   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.584370   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:56.588387    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:56.588387    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:56.628140    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:56.629024    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:56.673720    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:56.673720    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:59.242052    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:59.264739    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:59.293601    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.293601    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:59.297772    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:59.324701    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.324701    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:59.328642    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:59.358373    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.358373    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:59.362425    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:59.392638    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.392638    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:59.396206    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:59.423777    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.423777    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:59.427998    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:59.455368    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.455368    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:59.460647    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:59.488029    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.488029    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:59.488029    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:59.488029    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:59.548806    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:59.548806    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:59.580620    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:59.580620    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:59.670291    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:59.659775   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.660719   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.663685   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.664723   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.665878   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:59.659775   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.660719   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.663685   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.664723   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.665878   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:59.670291    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:59.670291    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:59.715000    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:59.715000    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:02.271675    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:02.295613    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:02.328792    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.328792    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:02.332483    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:02.364136    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.364136    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:02.368415    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:02.396018    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.396018    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:02.399987    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:02.426946    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.426946    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:02.430641    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:02.457307    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.457307    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:02.461639    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:02.490776    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.490776    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:02.495011    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:02.535030    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.535030    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:02.535030    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:02.535030    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:02.598020    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:02.598020    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:02.627885    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:02.627885    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:02.704890    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:02.692184   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.693898   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.695260   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.696802   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.698053   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:02.692184   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.693898   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.695260   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.696802   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.698053   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:02.704939    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:02.704939    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:02.743781    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:02.743781    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:05.296529    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:05.320338    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:05.350975    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.350975    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:05.354341    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:05.384954    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.384954    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:05.389226    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:05.416593    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.416663    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:05.420370    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:05.448275    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.448306    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:05.451950    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:05.489214    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.489214    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:05.492826    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:05.542815    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.542815    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:05.546994    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:05.577967    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.577967    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:05.577967    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:05.577967    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:05.666752    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:05.655586   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.656570   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.657861   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.659073   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.660173   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:05.655586   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.656570   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.657861   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.659073   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.660173   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:05.666752    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:05.666752    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:05.710699    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:05.710699    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:05.761552    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:05.761552    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:05.824698    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:05.824698    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:08.358868    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:08.384185    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:08.414077    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.414077    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:08.417802    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:08.449585    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.449585    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:08.453707    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:08.481690    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.481690    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:08.485802    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:08.526849    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.526849    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:08.530588    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:08.561211    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.561211    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:08.565127    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:08.592694    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.592781    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:08.596577    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:08.625262    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.625262    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:08.625262    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:08.625335    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:08.685169    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:08.685169    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:08.715897    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:08.715897    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:08.803701    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:08.791784   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.793050   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.794221   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.795525   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.797359   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:08.791784   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.793050   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.794221   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.795525   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.797359   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:08.803701    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:08.803701    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:08.843054    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:08.843054    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:11.399600    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:11.423207    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:11.452824    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.452824    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:11.456632    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:11.485718    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.485718    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:11.489975    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:11.516373    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.516442    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:11.520086    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:11.550008    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.550008    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:11.553479    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:11.582422    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.582422    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:11.586067    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:11.614204    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.614204    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:11.617891    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:11.647117    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.647117    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:11.647117    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:11.647117    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:11.708885    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:11.708885    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:11.738490    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:11.738490    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:11.827046    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:11.816517   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.817635   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.818643   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.819970   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.821231   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:11.816517   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.817635   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.818643   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.819970   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.821231   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:11.827046    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:11.827107    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:11.866493    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:11.866493    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:14.418219    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:14.441326    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:14.471617    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.471617    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:14.475764    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:14.525977    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.525977    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:14.530095    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:14.559065    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.559065    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:14.562300    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:14.591222    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.591222    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:14.595004    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:14.623409    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.623409    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:14.626892    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:14.654709    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.654709    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:14.658517    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:14.685033    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.685033    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:14.685033    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:14.685033    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:14.729797    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:14.729797    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:14.775571    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:14.775571    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:14.837326    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:14.837326    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:14.868773    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:14.868773    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:14.947701    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:14.936523   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.939018   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.940394   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.941385   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.944155   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:14.936523   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.939018   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.940394   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.941385   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.944155   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:17.453450    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:17.476221    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:17.508293    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.508388    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:17.512181    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:17.543844    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.543844    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:17.547662    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:17.575201    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.575201    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:17.578822    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:17.606210    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.606210    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:17.609909    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:17.635671    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.635671    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:17.639317    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:17.668567    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.668567    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:17.671701    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:17.698754    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.698754    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:17.698754    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:17.698835    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:17.746368    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:17.746368    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:17.807375    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:17.807375    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:17.838385    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:17.838385    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:17.926603    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:17.913747   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.914551   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.916660   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.920134   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.920887   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:17.913747   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.914551   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.916660   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.920134   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.920887   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:17.926603    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:17.926648    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:20.475641    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:20.498334    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:20.527197    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.527197    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:20.530922    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:20.557934    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.557934    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:20.561696    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:20.589458    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.589458    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:20.593618    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:20.618953    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.619013    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:20.622779    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:20.650087    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.650087    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:20.653349    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:20.680898    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.680898    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:20.684841    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:20.711841    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.711841    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:20.711841    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:20.711841    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:20.773325    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:20.773325    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:20.802932    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:20.802932    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:20.882468    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:20.873604   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.874733   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.875947   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.877488   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.878762   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:20.873604   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.874733   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.875947   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.877488   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.878762   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:20.882468    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:20.882468    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:20.924918    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:20.924918    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:23.483925    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:23.503925    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:23.531502    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.531502    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:23.535209    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:23.566493    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.566493    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:23.569915    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:23.598869    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.598869    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:23.603128    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:23.629658    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.629658    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:23.633104    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:23.659718    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.659718    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:23.663327    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:23.693156    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.693156    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:23.696530    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:23.727025    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.727025    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:23.727025    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:23.727025    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:23.788970    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:23.788970    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:23.819732    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:23.819732    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:23.903797    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:23.893226   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.894440   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.895734   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.897141   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.899253   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:23.893226   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.894440   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.895734   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.897141   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.899253   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:23.903797    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:23.903797    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:23.943716    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:23.943716    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:26.496986    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:26.519387    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:26.546439    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.546439    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:26.550311    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:26.579658    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.579658    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:26.583767    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:26.611690    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.611690    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:26.616096    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:26.642773    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.642773    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:26.646291    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:26.674086    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.674086    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:26.677423    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:26.705896    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.705896    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:26.709747    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:26.736563    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.736563    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:26.736563    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:26.736563    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:26.797921    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:26.797921    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:26.827915    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:26.827915    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:26.912180    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:26.902978   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.904059   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.905126   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.906124   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.907093   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:26.902978   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.904059   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.905126   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.906124   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.907093   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:26.912180    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:26.912180    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:26.952784    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:26.952784    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:29.506291    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:29.528153    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:29.558126    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.558126    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:29.562358    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:29.592320    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.592320    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:29.596049    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:29.628556    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.628556    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:29.632809    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:29.657311    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.657311    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:29.661781    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:29.690232    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.690261    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:29.693735    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:29.722288    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.722288    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:29.725599    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:29.757022    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.757022    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:29.757057    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:29.757057    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:29.838684    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:29.829268   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.831893   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.834799   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.835771   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.837035   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:29.829268   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.831893   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.834799   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.835771   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.837035   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:29.838684    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:29.840075    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:29.881968    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:29.881968    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:29.937264    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:29.937264    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:30.003954    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:30.003954    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:32.543156    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:32.567379    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:32.595089    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.595089    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:32.599147    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:32.627893    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.627962    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:32.631484    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:32.658969    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.658969    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:32.662719    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:32.689837    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.689837    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:32.693526    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:32.719931    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.719931    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:32.723427    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:32.754044    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.754044    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:32.757365    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:32.785242    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.785242    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:32.785242    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:32.785242    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:32.866344    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:32.854769   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.856146   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.858548   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.859713   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.861545   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:32.854769   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.856146   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.858548   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.859713   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.861545   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:32.866344    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:32.866344    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:32.910000    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:32.910000    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:32.959713    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:32.959713    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:33.023739    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:33.023739    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:35.563488    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:35.587848    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:35.619497    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.619497    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:35.625107    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:35.653936    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.653936    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:35.657619    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:35.684524    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.684524    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:35.687685    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:35.718759    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.718759    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:35.722575    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:35.749655    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.749655    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:35.753297    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:35.780974    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.780974    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:35.784685    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:35.810182    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.810182    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:35.810182    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:35.810182    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:35.892605    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:35.881515   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.884461   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.886394   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.887420   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.888495   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:35.881515   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.884461   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.886394   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.887420   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.888495   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:35.892605    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:35.892605    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:35.932890    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:35.932890    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:35.985679    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:35.985679    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:36.046361    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:36.046361    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:38.583800    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:38.606814    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:38.638211    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.638211    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:38.642266    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:38.669848    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.669848    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:38.673886    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:38.700984    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.700984    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:38.705078    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:38.729910    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.729910    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:38.733986    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:38.760705    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.760705    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:38.765121    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:38.799915    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.799915    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:38.804009    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:38.833364    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.833364    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:38.833364    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:38.833364    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:38.913728    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:38.904474   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.905728   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.906533   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.908851   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.910188   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:38.904474   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.905728   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.906533   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.908851   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.910188   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:38.914694    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:38.914694    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:38.953812    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:38.953812    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:38.999712    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:38.999712    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:39.060789    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:39.060789    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:41.597593    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:41.620430    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:41.650082    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.650082    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:41.653991    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:41.681237    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.681306    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:41.684963    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:41.713795    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.713795    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:41.719712    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:41.749037    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.749037    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:41.753070    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:41.779427    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.779427    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:41.783501    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:41.815751    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.815751    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:41.819560    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:41.847881    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.847881    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:41.847881    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:41.847931    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:41.927320    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:41.917717   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.918601   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.921188   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.923369   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.924481   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:41.917717   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.918601   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.921188   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.923369   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.924481   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:41.927320    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:41.927320    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:41.970940    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:41.970940    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:42.027555    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:42.027555    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:42.089451    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:42.089451    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:44.625751    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:44.648990    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:44.676551    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.676585    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:44.679722    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:44.709172    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.709172    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:44.713304    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:44.743046    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.743046    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:44.748526    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:44.778521    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.778521    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:44.782734    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:44.814603    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.814603    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:44.817683    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:44.845948    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.845948    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:44.849265    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:44.879812    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.879812    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:44.879812    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:44.879812    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:44.944127    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:44.944127    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:44.974113    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:44.974113    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:45.057102    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:45.045651   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.047975   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.050361   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.051485   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.052325   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:45.045651   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.047975   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.050361   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.051485   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.052325   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:45.057102    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:45.057102    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:45.100139    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:45.100139    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:47.652183    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:47.675849    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:47.706239    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.706239    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:47.709475    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:47.741233    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.741233    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:47.744861    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:47.774055    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.774055    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:47.777505    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:47.805794    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.805794    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:47.808964    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:47.836392    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.836392    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:47.841779    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:47.870715    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.870715    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:47.874288    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:47.901831    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.901831    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:47.901831    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:47.901831    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:47.944346    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:47.944346    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:47.988778    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:47.988778    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:48.052537    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:48.052537    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:48.083339    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:48.083339    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:48.169498    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:48.160314   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.161274   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.162381   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.163778   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.164689   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:48.160314   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.161274   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.162381   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.163778   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.164689   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:50.675888    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:50.695141    1528 kubeadm.go:602] duration metric: took 4m2.9691176s to restartPrimaryControlPlane
	W1212 20:10:50.695255    1528 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 20:10:50.699541    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1212 20:10:51.173784    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:10:51.196593    1528 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:10:51.210961    1528 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:10:51.215040    1528 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:10:51.228862    1528 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:10:51.228862    1528 kubeadm.go:158] found existing configuration files:
	
	I1212 20:10:51.232787    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:10:51.246730    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:10:51.251357    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:10:51.268580    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:10:51.283713    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:10:51.288367    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:10:51.308779    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:10:51.322868    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:10:51.327510    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:10:51.347243    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:10:51.360015    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:10:51.365274    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:10:51.383196    1528 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:10:51.503494    1528 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1212 20:10:51.590365    1528 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 20:10:51.685851    1528 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:14:52.890657    1528 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 20:14:52.890657    1528 kubeadm.go:319] 
	I1212 20:14:52.891189    1528 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 20:14:52.897133    1528 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 20:14:52.897133    1528 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:14:52.897133    1528 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:14:52.897133    1528 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_INET: enabled
	I1212 20:14:52.898464    1528 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 20:14:52.898582    1528 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 20:14:52.898779    1528 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 20:14:52.898920    1528 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 20:14:52.899045    1528 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 20:14:52.899131    1528 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 20:14:52.899262    1528 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 20:14:52.899432    1528 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 20:14:52.899517    1528 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 20:14:52.899644    1528 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 20:14:52.899729    1528 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 20:14:52.899847    1528 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 20:14:52.900038    1528 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 20:14:52.900217    1528 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 20:14:52.900390    1528 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 20:14:52.900502    1528 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 20:14:52.900574    1528 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 20:14:52.900710    1528 kubeadm.go:319] OS: Linux
	I1212 20:14:52.900833    1528 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:14:52.900915    1528 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 20:14:52.901708    1528 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:14:52.901818    1528 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:14:52.901818    1528 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:14:52.901818    1528 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:14:52.906810    1528 out.go:252]   - Generating certificates and keys ...
	I1212 20:14:52.906810    1528 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:14:52.906810    1528 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:14:52.906810    1528 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 20:14:52.907808    1528 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:14:52.907808    1528 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:14:52.907808    1528 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:14:52.907808    1528 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:14:52.908849    1528 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:14:52.908909    1528 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:14:52.908909    1528 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:14:52.908909    1528 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:14:52.912070    1528 out.go:252]   - Booting up control plane ...
	I1212 20:14:52.912070    1528 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:14:52.912070    1528 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:14:52.912070    1528 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:14:52.912070    1528 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:14:52.913075    1528 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:14:52.913075    1528 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:14:52.913075    1528 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:14:52.913075    1528 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:14:52.914083    1528 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:14:52.914083    1528 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:14:52.914083    1528 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000441542s
	I1212 20:14:52.914083    1528 kubeadm.go:319] 
	I1212 20:14:52.914083    1528 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 20:14:52.914083    1528 kubeadm.go:319] 	- The kubelet is not running
	I1212 20:14:52.914083    1528 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 20:14:52.914083    1528 kubeadm.go:319] 
	I1212 20:14:52.914083    1528 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 20:14:52.915069    1528 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 20:14:52.915069    1528 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 20:14:52.915069    1528 kubeadm.go:319] 
	W1212 20:14:52.915069    1528 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000441542s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 20:14:52.921774    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1212 20:14:53.390305    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:14:53.408818    1528 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:14:53.413243    1528 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:14:53.425325    1528 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:14:53.425325    1528 kubeadm.go:158] found existing configuration files:
	
	I1212 20:14:53.430625    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:14:53.442895    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:14:53.446965    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:14:53.464658    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:14:53.478038    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:14:53.482805    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:14:53.499083    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:14:53.513919    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:14:53.518566    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:14:53.538555    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:14:53.552479    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:14:53.557205    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:14:53.576642    1528 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:14:53.698383    1528 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1212 20:14:53.775189    1528 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 20:14:53.868267    1528 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:18:54.359522    1528 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 20:18:54.359522    1528 kubeadm.go:319] 
	I1212 20:18:54.359522    1528 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 20:18:54.362954    1528 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 20:18:54.363173    1528 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:18:54.363383    1528 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:18:54.363609    1528 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 20:18:54.363609    1528 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 20:18:54.363609    1528 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 20:18:54.363609    1528 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 20:18:54.363609    1528 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 20:18:54.363609    1528 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 20:18:54.364132    1528 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 20:18:54.364423    1528 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 20:18:54.364423    1528 kubeadm.go:319] CONFIG_INET: enabled
	I1212 20:18:54.364423    1528 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 20:18:54.364423    1528 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 20:18:54.364423    1528 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 20:18:54.364423    1528 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 20:18:54.364950    1528 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 20:18:54.365131    1528 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 20:18:54.365131    1528 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 20:18:54.365131    1528 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 20:18:54.365131    1528 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 20:18:54.365131    1528 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 20:18:54.365131    1528 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 20:18:54.365662    1528 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 20:18:54.365743    1528 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 20:18:54.365828    1528 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 20:18:54.365917    1528 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 20:18:54.366005    1528 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 20:18:54.366087    1528 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 20:18:54.366168    1528 kubeadm.go:319] OS: Linux
	I1212 20:18:54.366224    1528 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:18:54.366255    1528 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 20:18:54.366823    1528 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:18:54.366960    1528 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:18:54.367127    1528 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:18:54.367127    1528 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:18:54.369422    1528 out.go:252]   - Generating certificates and keys ...
	I1212 20:18:54.369422    1528 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:18:54.369422    1528 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:18:54.369953    1528 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 20:18:54.370159    1528 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 20:18:54.370228    1528 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 20:18:54.370309    1528 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 20:18:54.370471    1528 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 20:18:54.370639    1528 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 20:18:54.370717    1528 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 20:18:54.370717    1528 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 20:18:54.370717    1528 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 20:18:54.370717    1528 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:18:54.370717    1528 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:18:54.371251    1528 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:18:54.371313    1528 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:18:54.371344    1528 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:18:54.371344    1528 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:18:54.371344    1528 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:18:54.371344    1528 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:18:54.374291    1528 out.go:252]   - Booting up control plane ...
	I1212 20:18:54.374291    1528 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:18:54.374291    1528 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:18:54.374291    1528 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:18:54.374291    1528 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:18:54.375259    1528 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:18:54.375259    1528 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:18:54.375259    1528 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:18:54.375259    1528 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:18:54.375259    1528 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:18:54.375259    1528 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:18:54.375259    1528 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000961807s
	I1212 20:18:54.375259    1528 kubeadm.go:319] 
	I1212 20:18:54.376246    1528 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 20:18:54.376246    1528 kubeadm.go:319] 	- The kubelet is not running
	I1212 20:18:54.376405    1528 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 20:18:54.376405    1528 kubeadm.go:319] 
	I1212 20:18:54.376405    1528 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 20:18:54.376405    1528 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 20:18:54.376405    1528 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 20:18:54.376405    1528 kubeadm.go:319] 
	I1212 20:18:54.376405    1528 kubeadm.go:403] duration metric: took 12m6.6943451s to StartCluster
	I1212 20:18:54.376405    1528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:18:54.380250    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:18:54.441453    1528 cri.go:89] found id: ""
	I1212 20:18:54.441453    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.441453    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:18:54.441453    1528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:18:54.446414    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:18:54.508794    1528 cri.go:89] found id: ""
	I1212 20:18:54.508794    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.508794    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:18:54.508794    1528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:18:54.513698    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:18:54.553213    1528 cri.go:89] found id: ""
	I1212 20:18:54.553257    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.553257    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:18:54.553295    1528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:18:54.558235    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:18:54.603262    1528 cri.go:89] found id: ""
	I1212 20:18:54.603262    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.603262    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:18:54.603262    1528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:18:54.608185    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:18:54.648151    1528 cri.go:89] found id: ""
	I1212 20:18:54.648151    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.648151    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:18:54.648151    1528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:18:54.652647    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:18:54.693419    1528 cri.go:89] found id: ""
	I1212 20:18:54.693419    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.693419    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:18:54.693419    1528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:18:54.697661    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:18:54.737800    1528 cri.go:89] found id: ""
	I1212 20:18:54.737800    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.737800    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:18:54.737858    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:18:54.737858    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:18:54.790460    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:18:54.790460    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:18:54.852887    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:18:54.852887    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:18:54.883744    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:18:54.883744    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:18:54.965870    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:18:54.956009   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.957362   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.959496   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.962003   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.963316   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:18:54.956009   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.957362   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.959496   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.962003   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.963316   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:18:54.965870    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:18:54.965870    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1212 20:18:55.009075    1528 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000961807s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 20:18:55.009075    1528 out.go:285] * 
	W1212 20:18:55.009075    1528 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000961807s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 20:18:55.009075    1528 out.go:285] * 
	W1212 20:18:55.011173    1528 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:18:55.016858    1528 out.go:203] 
	W1212 20:18:55.021226    1528 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000961807s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 20:18:55.021226    1528 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 20:18:55.021226    1528 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 20:18:55.024694    1528 out.go:203] 
	
	
	==> Docker <==
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.259960912Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.259968912Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.260013916Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.260053720Z" level=info msg="Initializing buildkit"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.356181012Z" level=info msg="Completed buildkit initialization"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.364821976Z" level=info msg="Daemon has completed initialization"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.364991591Z" level=info msg="API listen on /run/docker.sock"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.365009292Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.365041495Z" level=info msg="API listen on [::]:2376"
	Dec 12 20:06:44 functional-468800 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 12 20:06:44 functional-468800 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 20:06:44 functional-468800 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 12 20:06:44 functional-468800 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 12 20:06:45 functional-468800 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Start docker client with request timeout 0s"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Loaded network plugin cni"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 12 20:06:45 functional-468800 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:22:20.944174   45808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:22:20.945781   45808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:22:20.947100   45808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:22:20.948749   45808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:22:20.951480   45808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000759] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000963] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000962] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001270] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000859] FS:  0000000000000000 GS:  0000000000000000
	[Dec12 20:06] CPU: 0 PID: 65148 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000914] RIP: 0033:0x7f777423db20
	[  +0.000416] Code: Unable to access opcode bytes at RIP 0x7f777423daf6.
	[  +0.000639] RSP: 002b:00007ffce8bd2080 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000797] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000800] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000998] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001310] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001295] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.002739] FS:  0000000000000000 GS:  0000000000000000
	[  +0.817521] CPU: 11 PID: 65286 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000931] RIP: 0033:0x7f6f12da3b20
	[  +0.000392] Code: Unable to access opcode bytes at RIP 0x7f6f12da3af6.
	[  +0.000657] RSP: 002b:00007ffe2eabe1e0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000812] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000817] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000776] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000784] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000994] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001070] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 20:22:21 up  1:23,  0 user,  load average: 0.49, 0.42, 0.45
	Linux functional-468800 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 20:22:17 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:22:18 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 591.
	Dec 12 20:22:18 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:22:18 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:22:18 functional-468800 kubelet[45525]: E1212 20:22:18.198041   45525 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:22:18 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:22:18 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:22:18 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 592.
	Dec 12 20:22:18 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:22:18 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:22:18 functional-468800 kubelet[45536]: E1212 20:22:18.938875   45536 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:22:18 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:22:18 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:22:19 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 593.
	Dec 12 20:22:19 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:22:19 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:22:19 functional-468800 kubelet[45570]: E1212 20:22:19.707804   45570 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:22:19 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:22:19 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:22:20 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 594.
	Dec 12 20:22:20 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:22:20 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:22:20 functional-468800 kubelet[45698]: E1212 20:22:20.436589   45698 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:22:20 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:22:20 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-468800 -n functional-468800
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-468800 -n functional-468800: exit status 2 (620.6376ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-468800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (124.12s)
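Note: the repeated kubelet crash in the log above ("kubelet is configured to not run on a host using cgroup v1", restart counter past 590) matches the kubeadm SystemVerification warning: kubelet v1.35+ refuses to start on a cgroup v1 host unless `failCgroupV1` is explicitly set to `false` in the kubelet configuration. A minimal KubeletConfiguration fragment illustrating the option named in that warning (field casing per the kubelet.config.k8s.io/v1beta1 API; whether minikube's `--extra-config` path can inject this on this image is an assumption, not verified here):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Opt back into (deprecated) cgroup v1 support, per the
# kubeadm preflight warning referencing KEP sig-node/5573.
failCgroupV1: false
```

The longer-term fix suggested by the same warning is moving the WSL2 host to cgroup v2 rather than re-enabling v1.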

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (242.85s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:55778/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: [identical EOF warning repeated 22 more times during the 4m0s wait]
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-468800 -n functional-468800
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-468800 -n functional-468800: exit status 2 (585.999ms)

-- stdout --
	Stopped

-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-468800" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
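Every EOF warning above polls the same apiserver endpoint. As a minimal sketch (the function name and structure are illustrative, not taken from the test code), the polled URL can be reconstructed from the host, port, namespace, and label selector visible in the log; note the `=` in the label selector is percent-encoded to `%3D`:

```python
from urllib.parse import quote

def pod_list_url(host: str, port: int, namespace: str, label_selector: str) -> str:
    """Build the kube-apiserver pod-list URL that the test helper polls."""
    # safe="" forces '=' to be percent-encoded, matching the %3D in the log.
    selector = quote(label_selector, safe="")
    return (f"https://{host}:{port}/api/v1/namespaces/{namespace}/pods"
            f"?labelSelector={selector}")

url = pod_list_url("127.0.0.1", 55778, "kube-system",
                   "integration-test=storage-provisioner")
print(url)
# → https://127.0.0.1:55778/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner
```

The repeated EOF on this URL (rather than an HTTP error) suggests the TCP connection to the forwarded port succeeds but the apiserver closes it, consistent with the `Stopped` apiserver status reported below.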
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-468800
helpers_test.go:244: (dbg) docker inspect functional-468800:

-- stdout --
	[
	    {
	        "Id": "0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356",
	        "Created": "2025-12-12T19:49:05.216637357Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42493,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T19:49:05.485304524Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/hosts",
	        "LogPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356-json.log",
	        "Name": "/functional-468800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-468800:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-468800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06-init/diff:/var/lib/docker/overlay2/98a48e148b465d6f3cfef773b7defe35ccd4e6e0eb3c49d8b329f27b5b52fa09/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-468800",
	                "Source": "/var/lib/docker/volumes/functional-468800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-468800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-468800",
	                "name.minikube.sigs.k8s.io": "functional-468800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1c576a1bc459e3ecef05356a56ff79fb60b3540cf362d7af9e4f9ccd4a4dded4",
	            "SandboxKey": "/var/run/docker/netns/1c576a1bc459",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55779"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55780"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55781"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55777"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55778"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-468800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a0db2e63885ea6d1e4cf32e8285a0f1e1fe10bae67ef9ab957c6270bdb8c136b",
	                    "EndpointID": "d5e453160824066e5fee477ceb2ca5411fd18d149268eed593084ba8a0cf13dd",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-468800",
	                        "0d3494c9d93e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
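The inspect output above shows container port 8441/tcp (the apiserver) bound to host port 55778, which is exactly the port the failing requests target, so the Docker port mapping itself is intact and the EOFs point at the apiserver inside the container. A small sketch (hypothetical helper, using a trimmed excerpt of the JSON above) of pulling that mapping out of `docker inspect` output:

```python
import json

# Trimmed-down excerpt of the docker inspect output shown above.
inspect_output = """
[{"NetworkSettings": {"Ports": {
    "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "55779"}],
    "8441/tcp": [{"HostIp": "127.0.0.1", "HostPort": "55778"}]
}}}]
"""

def host_port(inspect_json: str, container_port: str) -> str:
    """Return the first host port bound to the given container port."""
    data = json.loads(inspect_json)
    return data[0]["NetworkSettings"]["Ports"][container_port][0]["HostPort"]

print(host_port(inspect_output, "8441/tcp"))
# → 55778
```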
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-468800 -n functional-468800
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-468800 -n functional-468800: exit status 2 (577.5715ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-468800 logs -n 25: (1.0270949s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-468800 image ls                                                                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image          │ functional-468800 image load --daemon kicbase/echo-server:functional-468800 --alsologtostderr                                                             │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image          │ functional-468800 image ls                                                                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image          │ functional-468800 image load --daemon kicbase/echo-server:functional-468800 --alsologtostderr                                                             │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image          │ functional-468800 image ls                                                                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image          │ functional-468800 image save kicbase/echo-server:functional-468800 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image          │ functional-468800 image rm kicbase/echo-server:functional-468800 --alsologtostderr                                                                        │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image          │ functional-468800 image ls                                                                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image          │ functional-468800 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr                                       │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image          │ functional-468800 image ls                                                                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image          │ functional-468800 image save --daemon kicbase/echo-server:functional-468800 --alsologtostderr                                                             │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ start          │ -p functional-468800 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0                                       │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:22 UTC │                     │
	│ start          │ -p functional-468800 --dry-run --alsologtostderr -v=1 --driver=docker --kubernetes-version=v1.35.0-beta.0                                                 │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:22 UTC │                     │
	│ start          │ -p functional-468800 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0                                       │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:22 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-468800 --alsologtostderr -v=1                                                                                            │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:22 UTC │                     │
	│ update-context │ functional-468800 update-context --alsologtostderr -v=2                                                                                                   │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:22 UTC │ 12 Dec 25 20:22 UTC │
	│ update-context │ functional-468800 update-context --alsologtostderr -v=2                                                                                                   │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:22 UTC │ 12 Dec 25 20:22 UTC │
	│ update-context │ functional-468800 update-context --alsologtostderr -v=2                                                                                                   │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:22 UTC │ 12 Dec 25 20:22 UTC │
	│ image          │ functional-468800 image ls --format short --alsologtostderr                                                                                               │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:22 UTC │ 12 Dec 25 20:22 UTC │
	│ ssh            │ functional-468800 ssh pgrep buildkitd                                                                                                                     │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:22 UTC │                     │
	│ image          │ functional-468800 image ls --format yaml --alsologtostderr                                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:22 UTC │ 12 Dec 25 20:22 UTC │
	│ image          │ functional-468800 image build -t localhost/my-image:functional-468800 testdata\build --alsologtostderr                                                    │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:22 UTC │ 12 Dec 25 20:22 UTC │
	│ image          │ functional-468800 image ls --format json --alsologtostderr                                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:22 UTC │ 12 Dec 25 20:22 UTC │
	│ image          │ functional-468800 image ls --format table --alsologtostderr                                                                                               │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:22 UTC │ 12 Dec 25 20:22 UTC │
	│ image          │ functional-468800 image ls                                                                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:22 UTC │ 12 Dec 25 20:22 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:22:23
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:22:23.221121    3452 out.go:360] Setting OutFile to fd 1996 ...
	I1212 20:22:23.279090    3452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:22:23.279090    3452 out.go:374] Setting ErrFile to fd 1012...
	I1212 20:22:23.279090    3452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:22:23.304610    3452 out.go:368] Setting JSON to false
	I1212 20:22:23.307604    3452 start.go:133] hostinfo: {"hostname":"minikube4","uptime":5081,"bootTime":1765565862,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 20:22:23.307604    3452 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 20:22:23.311607    3452 out.go:179] * [functional-468800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 20:22:23.312604    3452 notify.go:221] Checking for updates...
	I1212 20:22:23.315613    3452 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 20:22:23.317610    3452 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:22:23.319615    3452 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 20:22:23.322604    3452 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:22:23.324597    3452 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:22:23.190445   11460 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 20:22:23.191446   11460 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:22:23.314613   11460 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 20:22:23.318612   11460 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:22:23.584928   11460 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 20:22:23.558479431 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 20:22:23.588921   11460 out.go:179] * Using the docker driver based on existing profile
	I1212 20:22:23.326596    3452 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 20:22:23.327598    3452 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:22:23.481603    3452 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 20:22:23.484608    3452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:22:23.737731    3452 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 20:22:23.719883179 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 20:22:23.742724    3452 out.go:179] * Using the docker driver based on the existing profile
	I1212 20:22:23.744723    3452 start.go:309] selected driver: docker
	I1212 20:22:23.744723    3452 start.go:927] validating driver "docker" against &{Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:22:23.744723    3452 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:22:23.781732    3452 out.go:203] 
	W1212 20:22:23.784721    3452 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1212 20:22:23.786731    3452 out.go:203] 
	I1212 20:22:23.590929   11460 start.go:309] selected driver: docker
	I1212 20:22:23.590929   11460 start.go:927] validating driver "docker" against &{Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:22:23.590929   11460 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:22:23.598921   11460 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:22:23.834212   11460 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 20:22:23.818279187 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 20:22:23.869216   11460 cni.go:84] Creating CNI manager for ""
	I1212 20:22:23.869216   11460 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 20:22:23.869216   11460 start.go:353] cluster config:
	{Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDN
SLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:22:23.874216   11460 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.259968912Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.260013916Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.260053720Z" level=info msg="Initializing buildkit"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.356181012Z" level=info msg="Completed buildkit initialization"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.364821976Z" level=info msg="Daemon has completed initialization"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.364991591Z" level=info msg="API listen on /run/docker.sock"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.365009292Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.365041495Z" level=info msg="API listen on [::]:2376"
	Dec 12 20:06:44 functional-468800 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 12 20:06:44 functional-468800 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 20:06:44 functional-468800 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 12 20:06:44 functional-468800 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 12 20:06:45 functional-468800 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Start docker client with request timeout 0s"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Loaded network plugin cni"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 12 20:06:45 functional-468800 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 12 20:22:29 functional-468800 dockerd[21655]: time="2025-12-12T20:22:29.079167261Z" level=info msg="sbJoin: gwep4 ''->'b0567f43a52d', gwep6 ''->''"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:24:20.109722   48373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:24:20.112493   48373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:24:20.114986   48373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:24:20.116077   48373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:24:20.117356   48373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000759] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000963] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000962] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001270] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000859] FS:  0000000000000000 GS:  0000000000000000
	[Dec12 20:06] CPU: 0 PID: 65148 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000914] RIP: 0033:0x7f777423db20
	[  +0.000416] Code: Unable to access opcode bytes at RIP 0x7f777423daf6.
	[  +0.000639] RSP: 002b:00007ffce8bd2080 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000797] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000800] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000998] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001310] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001295] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.002739] FS:  0000000000000000 GS:  0000000000000000
	[  +0.817521] CPU: 11 PID: 65286 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000931] RIP: 0033:0x7f6f12da3b20
	[  +0.000392] Code: Unable to access opcode bytes at RIP 0x7f6f12da3af6.
	[  +0.000657] RSP: 002b:00007ffe2eabe1e0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000812] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000817] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000776] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000784] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000994] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001070] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 20:24:20 up  1:25,  0 user,  load average: 0.27, 0.38, 0.44
	Linux functional-468800 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 20:24:16 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:24:17 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 750.
	Dec 12 20:24:17 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:24:17 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:24:17 functional-468800 kubelet[48194]: E1212 20:24:17.437274   48194 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:24:17 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:24:17 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:24:18 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 751.
	Dec 12 20:24:18 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:24:18 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:24:18 functional-468800 kubelet[48206]: E1212 20:24:18.184575   48206 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:24:18 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:24:18 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:24:18 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 752.
	Dec 12 20:24:18 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:24:18 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:24:18 functional-468800 kubelet[48233]: E1212 20:24:18.944217   48233 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:24:18 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:24:18 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:24:19 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 753.
	Dec 12 20:24:19 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:24:19 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:24:19 functional-468800 kubelet[48264]: E1212 20:24:19.687015   48264 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:24:19 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:24:19 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-468800 -n functional-468800
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-468800 -n functional-468800: exit status 2 (581.0919ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-468800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (242.85s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (23.78s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-468800 replace --force -f testdata\mysql.yaml
functional_test.go:1798: (dbg) Non-zero exit: kubectl --context functional-468800 replace --force -f testdata\mysql.yaml: exit status 1 (20.2155542s)

** stderr ** 
	E1212 20:21:20.640717    9288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:21:30.721679    9288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	unable to recognize "testdata\\mysql.yaml": Get "https://127.0.0.1:55778/api?timeout=32s": EOF
	unable to recognize "testdata\\mysql.yaml": Get "https://127.0.0.1:55778/api?timeout=32s": EOF

** /stderr **
functional_test.go:1800: failed to kubectl replace mysql: args "kubectl --context functional-468800 replace --force -f testdata\\mysql.yaml" failed: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-468800
helpers_test.go:244: (dbg) docker inspect functional-468800:

-- stdout --
	[
	    {
	        "Id": "0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356",
	        "Created": "2025-12-12T19:49:05.216637357Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42493,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T19:49:05.485304524Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/hosts",
	        "LogPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356-json.log",
	        "Name": "/functional-468800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-468800:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-468800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06-init/diff:/var/lib/docker/overlay2/98a48e148b465d6f3cfef773b7defe35ccd4e6e0eb3c49d8b329f27b5b52fa09/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-468800",
	                "Source": "/var/lib/docker/volumes/functional-468800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-468800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-468800",
	                "name.minikube.sigs.k8s.io": "functional-468800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1c576a1bc459e3ecef05356a56ff79fb60b3540cf362d7af9e4f9ccd4a4dded4",
	            "SandboxKey": "/var/run/docker/netns/1c576a1bc459",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55779"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55780"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55781"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55777"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55778"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-468800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a0db2e63885ea6d1e4cf32e8285a0f1e1fe10bae67ef9ab957c6270bdb8c136b",
	                    "EndpointID": "d5e453160824066e5fee477ceb2ca5411fd18d149268eed593084ba8a0cf13dd",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-468800",
	                        "0d3494c9d93e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-468800 -n functional-468800
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-468800 -n functional-468800: exit status 2 (588.8211ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-468800 logs -n 25: (1.2807292s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND   │                                                                           ARGS                                                                            │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service    │ functional-468800 service hello-node --url                                                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │                     │
	│ ssh        │ functional-468800 ssh -n functional-468800 sudo cat /tmp/does/not/exist/cp-test.txt                                                                       │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ tunnel     │ functional-468800 tunnel --alsologtostderr                                                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │                     │
	│ addons     │ functional-468800 addons list                                                                                                                             │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ addons     │ functional-468800 addons list -o json                                                                                                                     │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh        │ functional-468800 ssh sudo cat /etc/ssl/certs/13396.pem                                                                                                   │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh        │ functional-468800 ssh sudo cat /usr/share/ca-certificates/13396.pem                                                                                       │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh        │ functional-468800 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh        │ functional-468800 ssh sudo cat /etc/ssl/certs/133962.pem                                                                                                  │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh        │ functional-468800 ssh sudo cat /usr/share/ca-certificates/133962.pem                                                                                      │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh        │ functional-468800 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ docker-env │ functional-468800 docker-env                                                                                                                              │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh        │ functional-468800 ssh sudo cat /etc/test/nested/copy/13396/hosts                                                                                          │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image      │ functional-468800 image load --daemon kicbase/echo-server:functional-468800 --alsologtostderr                                                             │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image      │ functional-468800 image ls                                                                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image      │ functional-468800 image load --daemon kicbase/echo-server:functional-468800 --alsologtostderr                                                             │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image      │ functional-468800 image ls                                                                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image      │ functional-468800 image load --daemon kicbase/echo-server:functional-468800 --alsologtostderr                                                             │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image      │ functional-468800 image ls                                                                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image      │ functional-468800 image save kicbase/echo-server:functional-468800 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image      │ functional-468800 image rm kicbase/echo-server:functional-468800 --alsologtostderr                                                                        │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image      │ functional-468800 image ls                                                                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image      │ functional-468800 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr                                       │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image      │ functional-468800 image ls                                                                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image      │ functional-468800 image save --daemon kicbase/echo-server:functional-468800 --alsologtostderr                                                             │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	└────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:06:38
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:06:38.727985    1528 out.go:360] Setting OutFile to fd 1056 ...
	I1212 20:06:38.773098    1528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:06:38.773098    1528 out.go:374] Setting ErrFile to fd 1212...
	I1212 20:06:38.773098    1528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:06:38.787709    1528 out.go:368] Setting JSON to false
	I1212 20:06:38.790304    1528 start.go:133] hostinfo: {"hostname":"minikube4","uptime":4136,"bootTime":1765565861,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 20:06:38.790304    1528 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 20:06:38.796304    1528 out.go:179] * [functional-468800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 20:06:38.800290    1528 notify.go:221] Checking for updates...
	I1212 20:06:38.800290    1528 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 20:06:38.802303    1528 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:06:38.805306    1528 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 20:06:38.807332    1528 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:06:38.808856    1528 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:06:38.812430    1528 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 20:06:38.812430    1528 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:06:38.929707    1528 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 20:06:38.933677    1528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:06:39.195122    1528 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-12 20:06:39.177384092 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 20:06:39.201119    1528 out.go:179] * Using the docker driver based on existing profile
	I1212 20:06:39.203117    1528 start.go:309] selected driver: docker
	I1212 20:06:39.203117    1528 start.go:927] validating driver "docker" against &{Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreD
NSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:06:39.203117    1528 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:06:39.209122    1528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:06:39.449342    1528 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-12 20:06:39.430307853 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 20:06:39.528922    1528 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:06:39.529468    1528 cni.go:84] Creating CNI manager for ""
	I1212 20:06:39.529468    1528 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 20:06:39.529468    1528 start.go:353] cluster config:
	{Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:06:39.533005    1528 out.go:179] * Starting "functional-468800" primary control-plane node in "functional-468800" cluster
	I1212 20:06:39.535095    1528 cache.go:134] Beginning downloading kic base image for docker with docker
	I1212 20:06:39.537607    1528 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:06:39.540959    1528 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 20:06:39.540959    1528 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:06:39.540959    1528 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1212 20:06:39.540959    1528 cache.go:65] Caching tarball of preloaded images
	I1212 20:06:39.541554    1528 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 20:06:39.541554    1528 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1212 20:06:39.541554    1528 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\config.json ...
	I1212 20:06:39.619509    1528 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:06:39.619509    1528 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:06:39.619509    1528 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:06:39.619509    1528 start.go:360] acquireMachinesLock for functional-468800: {Name:mk7e7177bdfcb5a7969561474f8bb14fa15c1eab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:06:39.619509    1528 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-468800"
	I1212 20:06:39.620041    1528 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:06:39.620041    1528 fix.go:54] fixHost starting: 
	I1212 20:06:39.627157    1528 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
	I1212 20:06:39.683014    1528 fix.go:112] recreateIfNeeded on functional-468800: state=Running err=<nil>
	W1212 20:06:39.683376    1528 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 20:06:39.686124    1528 out.go:252] * Updating the running docker "functional-468800" container ...
	I1212 20:06:39.686124    1528 machine.go:94] provisionDockerMachine start ...
	I1212 20:06:39.689814    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:39.744908    1528 main.go:143] libmachine: Using SSH client type: native
	I1212 20:06:39.745476    1528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 20:06:39.745476    1528 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:06:39.930965    1528 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-468800
	
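	[Editor's note] The repeated `docker container inspect -f` calls above pass a Go template that digs the published host port for `22/tcp` out of the inspect output. A minimal, self-contained sketch of that extraction follows; the JSON literal is a hand-written stand-in shaped like real inspect output, and 55779 is the host port seen in this log:

	```go
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"text/template"
	)

	// container models only the fields the inspect template touches.
	type container struct {
		NetworkSettings struct {
			Ports map[string][]struct{ HostPort string }
		}
	}

	func main() {
		// Stand-in for `docker container inspect` JSON: SSH port 22
		// published on host port 55779.
		data := []byte(`{"NetworkSettings":{"Ports":{"22/tcp":[{"HostPort":"55779"}]}}}`)
		var c container
		if err := json.Unmarshal(data, &c); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// The same template string minikube passes via -f.
		tmpl := template.Must(template.New("port").Parse(
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
		if err := tmpl.Execute(os.Stdout, c); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println()
	}
	```

	The inner `index` looks up the `"22/tcp"` key in the ports map, the outer one takes element 0 of the bindings slice, and `.HostPort` selects the field, which is why the log then dials SSH on 127.0.0.1:55779.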
	I1212 20:06:39.931078    1528 ubuntu.go:182] provisioning hostname "functional-468800"
	I1212 20:06:39.934795    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:39.989752    1528 main.go:143] libmachine: Using SSH client type: native
	I1212 20:06:39.990452    1528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 20:06:39.990452    1528 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-468800 && echo "functional-468800" | sudo tee /etc/hostname
	I1212 20:06:40.176756    1528 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-468800
	
	I1212 20:06:40.180410    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:40.235554    1528 main.go:143] libmachine: Using SSH client type: native
	I1212 20:06:40.236742    1528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 20:06:40.236742    1528 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-468800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-468800/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-468800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:06:40.410965    1528 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:06:40.410965    1528 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1212 20:06:40.410965    1528 ubuntu.go:190] setting up certificates
	I1212 20:06:40.410965    1528 provision.go:84] configureAuth start
	I1212 20:06:40.414835    1528 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-468800
	I1212 20:06:40.468680    1528 provision.go:143] copyHostCerts
	I1212 20:06:40.468680    1528 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1212 20:06:40.468680    1528 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1212 20:06:40.468680    1528 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 20:06:40.469680    1528 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1212 20:06:40.469680    1528 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1212 20:06:40.469680    1528 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 20:06:40.470682    1528 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1212 20:06:40.470682    1528 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1212 20:06:40.470682    1528 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1212 20:06:40.471679    1528 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-468800 san=[127.0.0.1 192.168.49.2 functional-468800 localhost minikube]
	I1212 20:06:40.521679    1528 provision.go:177] copyRemoteCerts
	I1212 20:06:40.526217    1528 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:06:40.529224    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:40.578843    1528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 20:06:40.705122    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 20:06:40.732235    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 20:06:40.758034    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 20:06:40.787536    1528 provision.go:87] duration metric: took 376.5012ms to configureAuth
	I1212 20:06:40.787564    1528 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:06:40.788016    1528 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 20:06:40.791899    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:40.847433    1528 main.go:143] libmachine: Using SSH client type: native
	I1212 20:06:40.847433    1528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 20:06:40.847433    1528 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 20:06:41.031514    1528 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1212 20:06:41.031514    1528 ubuntu.go:71] root file system type: overlay
	I1212 20:06:41.031514    1528 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 20:06:41.035525    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:41.089326    1528 main.go:143] libmachine: Using SSH client type: native
	I1212 20:06:41.090065    1528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 20:06:41.090155    1528 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 20:06:41.283431    1528 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 20:06:41.287473    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:41.343081    1528 main.go:143] libmachine: Using SSH client type: native
	I1212 20:06:41.343562    1528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 55779 <nil> <nil>}
	I1212 20:06:41.343562    1528 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 20:06:41.525616    1528 main.go:143] libmachine: SSH cmd err, output: <nil>: 
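	[Editor's note] The `diff -u ... || { sudo mv ...; }` command above only installs `docker.service.new` and restarts Docker when the rendered unit actually differs from the file on disk, making re-provisioning a no-op on an unchanged machine. The same idempotent-update pattern can be sketched in Go; `installIfChanged` and the temp paths here are illustrative, not minikube code, and the daemon-reload/restart step is elided:

	```go
	package main

	import (
		"bytes"
		"fmt"
		"os"
		"path/filepath"
	)

	// installIfChanged writes candidate to path+".new" and moves it into
	// place only when it differs from the current contents, mirroring the
	// diff-then-mv shell command in the log.
	func installIfChanged(path string, candidate []byte) (bool, error) {
		current, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return false, err
		}
		if bytes.Equal(current, candidate) {
			return false, nil // unchanged: skip the swap (and the restart)
		}
		tmp := path + ".new"
		if err := os.WriteFile(tmp, candidate, 0o644); err != nil {
			return false, err
		}
		return true, os.Rename(tmp, path)
	}

	func main() {
		dir, _ := os.MkdirTemp("", "unit")
		defer os.RemoveAll(dir)
		path := filepath.Join(dir, "docker.service")
		os.WriteFile(path, []byte("old"), 0o644)

		changed, _ := installIfChanged(path, []byte("new"))
		fmt.Println(changed) // contents differed: file replaced
		changed, _ = installIfChanged(path, []byte("new"))
		fmt.Println(changed) // second run: nothing to do
	}
	```

	The empty SSH output on the next line is consistent with the no-op branch: the unit already matched, so nothing was moved or restarted.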
	I1212 20:06:41.525616    1528 machine.go:97] duration metric: took 1.8394714s to provisionDockerMachine
	I1212 20:06:41.525616    1528 start.go:293] postStartSetup for "functional-468800" (driver="docker")
	I1212 20:06:41.525616    1528 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:06:41.530519    1528 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:06:41.534083    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:41.586502    1528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 20:06:41.720007    1528 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:06:41.727943    1528 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:06:41.727943    1528 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:06:41.727943    1528 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1212 20:06:41.727943    1528 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1212 20:06:41.728602    1528 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> 133962.pem in /etc/ssl/certs
	I1212 20:06:41.729437    1528 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\13396\hosts -> hosts in /etc/test/nested/copy/13396
	I1212 20:06:41.733519    1528 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/13396
	I1212 20:06:41.745958    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /etc/ssl/certs/133962.pem (1708 bytes)
	I1212 20:06:41.772738    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\13396\hosts --> /etc/test/nested/copy/13396/hosts (40 bytes)
	I1212 20:06:41.802626    1528 start.go:296] duration metric: took 277.0071ms for postStartSetup
	I1212 20:06:41.807164    1528 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:06:41.809505    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:41.864695    1528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 20:06:41.985729    1528 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:06:41.994649    1528 fix.go:56] duration metric: took 2.3745808s for fixHost
	I1212 20:06:41.994649    1528 start.go:83] releasing machines lock for "functional-468800", held for 2.3751133s
	I1212 20:06:41.998707    1528 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-468800
	I1212 20:06:42.059230    1528 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1212 20:06:42.063903    1528 ssh_runner.go:195] Run: cat /version.json
	I1212 20:06:42.063903    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:42.066691    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:42.116356    1528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	I1212 20:06:42.117357    1528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
	W1212 20:06:42.228585    1528 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1212 20:06:42.232646    1528 ssh_runner.go:195] Run: systemctl --version
	I1212 20:06:42.247485    1528 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:06:42.257236    1528 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:06:42.263875    1528 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:06:42.279473    1528 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:06:42.279473    1528 start.go:496] detecting cgroup driver to use...
	I1212 20:06:42.279473    1528 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 20:06:42.283549    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:06:42.307873    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 20:06:42.326439    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 20:06:42.341366    1528 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 20:06:42.345268    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1212 20:06:42.347179    1528 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1212 20:06:42.347179    1528 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1212 20:06:42.365551    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 20:06:42.385740    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 20:06:42.407021    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 20:06:42.427172    1528 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:06:42.448213    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 20:06:42.467444    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 20:06:42.487296    1528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 20:06:42.507050    1528 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:06:42.524437    1528 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:06:42.541928    1528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:06:42.701987    1528 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 20:06:42.867618    1528 start.go:496] detecting cgroup driver to use...
	I1212 20:06:42.867618    1528 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 20:06:42.872524    1528 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 20:06:42.900833    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:06:42.922770    1528 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:06:42.982495    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:06:43.005292    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 20:06:43.026719    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:06:43.052829    1528 ssh_runner.go:195] Run: which cri-dockerd
	I1212 20:06:43.064606    1528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 20:06:43.079549    1528 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1212 20:06:43.104999    1528 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 20:06:43.240280    1528 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 20:06:43.379193    1528 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 20:06:43.379358    1528 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 20:06:43.405761    1528 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1212 20:06:43.427392    1528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:06:43.565288    1528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 20:06:44.374705    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:06:44.396001    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1212 20:06:44.418749    1528 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1212 20:06:44.445721    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 20:06:44.466663    1528 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 20:06:44.598807    1528 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 20:06:44.740962    1528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:06:44.883493    1528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 20:06:44.907977    1528 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1212 20:06:44.931006    1528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:06:45.071046    1528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1212 20:06:45.171465    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 20:06:45.190143    1528 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 20:06:45.194535    1528 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 20:06:45.202518    1528 start.go:564] Will wait 60s for crictl version
	I1212 20:06:45.206873    1528 ssh_runner.go:195] Run: which crictl
	I1212 20:06:45.221614    1528 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:06:45.263002    1528 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1212 20:06:45.266767    1528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 20:06:45.308717    1528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 20:06:45.348580    1528 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1212 20:06:45.352493    1528 cli_runner.go:164] Run: docker exec -t functional-468800 dig +short host.docker.internal
	I1212 20:06:45.482840    1528 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1212 20:06:45.487311    1528 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1212 20:06:45.498523    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:45.552748    1528 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1212 20:06:45.554383    1528 kubeadm.go:884] updating cluster {Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:06:45.554933    1528 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 20:06:45.558499    1528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 20:06:45.589105    1528 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-468800
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1212 20:06:45.589105    1528 docker.go:621] Images already preloaded, skipping extraction
	I1212 20:06:45.592742    1528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 20:06:45.625313    1528 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-468800
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1212 20:06:45.625313    1528 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:06:45.625313    1528 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1212 20:06:45.625829    1528 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-468800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:06:45.629232    1528 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 20:06:45.698056    1528 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1212 20:06:45.698078    1528 cni.go:84] Creating CNI manager for ""
	I1212 20:06:45.698133    1528 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 20:06:45.698180    1528 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:06:45.698180    1528 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-468800 NodeName:functional-468800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConf
igOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:06:45.698180    1528 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-468800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:06:45.702170    1528 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 20:06:45.714209    1528 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:06:45.719390    1528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:06:45.731628    1528 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1212 20:06:45.753236    1528 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 20:06:45.772644    1528 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I1212 20:06:45.798125    1528 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 20:06:45.809796    1528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:06:45.998447    1528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:06:46.682417    1528 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800 for IP: 192.168.49.2
	I1212 20:06:46.682417    1528 certs.go:195] generating shared ca certs ...
	I1212 20:06:46.682417    1528 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:06:46.683216    1528 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1212 20:06:46.683331    1528 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1212 20:06:46.683331    1528 certs.go:257] generating profile certs ...
	I1212 20:06:46.683996    1528 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\client.key
	I1212 20:06:46.684112    1528 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.key.a2fee78d
	I1212 20:06:46.684112    1528 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.key
	I1212 20:06:46.685029    1528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem (1338 bytes)
	W1212 20:06:46.685029    1528 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396_empty.pem, impossibly tiny 0 bytes
	I1212 20:06:46.685029    1528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1212 20:06:46.685554    1528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 20:06:46.685624    1528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 20:06:46.685624    1528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1212 20:06:46.685624    1528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem (1708 bytes)
	I1212 20:06:46.686999    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:06:46.715172    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 20:06:46.745329    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:06:46.775248    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:06:46.804288    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 20:06:46.833541    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 20:06:46.858974    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:06:46.883320    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-468800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:06:46.912462    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:06:46.937010    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem --> /usr/share/ca-certificates/13396.pem (1338 bytes)
	I1212 20:06:46.963968    1528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /usr/share/ca-certificates/133962.pem (1708 bytes)
	I1212 20:06:46.987545    1528 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:06:47.014201    1528 ssh_runner.go:195] Run: openssl version
	I1212 20:06:47.028684    1528 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:06:47.047532    1528 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:06:47.066889    1528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:06:47.074545    1528 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:06:47.078818    1528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:06:47.128719    1528 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:06:47.145523    1528 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13396.pem
	I1212 20:06:47.162300    1528 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13396.pem /etc/ssl/certs/13396.pem
	I1212 20:06:47.179220    1528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13396.pem
	I1212 20:06:47.188551    1528 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 20:06:47.193732    1528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13396.pem
	I1212 20:06:47.241331    1528 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:06:47.258219    1528 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/133962.pem
	I1212 20:06:47.276085    1528 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/133962.pem /etc/ssl/certs/133962.pem
	I1212 20:06:47.293199    1528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133962.pem
	I1212 20:06:47.300084    1528 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 20:06:47.304026    1528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133962.pem
	I1212 20:06:47.352991    1528 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:06:47.371677    1528 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:06:47.384558    1528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:06:47.433291    1528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:06:47.480566    1528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:06:47.530653    1528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:06:47.582068    1528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:06:47.630287    1528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 20:06:47.673527    1528 kubeadm.go:401] StartCluster: {Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:06:47.678147    1528 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 20:06:47.710789    1528 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:06:47.723256    1528 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 20:06:47.723256    1528 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 20:06:47.727283    1528 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 20:06:47.740989    1528 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:06:47.744500    1528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-468800
	I1212 20:06:47.805147    1528 kubeconfig.go:125] found "functional-468800" server: "https://127.0.0.1:55778"
	I1212 20:06:47.813022    1528 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 20:06:47.830078    1528 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-12 19:49:17.606323144 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-12 20:06:45.789464240 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1212 20:06:47.830078    1528 kubeadm.go:1161] stopping kube-system containers ...
	I1212 20:06:47.833739    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 20:06:47.872403    1528 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 20:06:47.898698    1528 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:06:47.911626    1528 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 12 19:53 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 12 19:53 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 12 19:53 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 12 19:53 /etc/kubernetes/scheduler.conf
	
	I1212 20:06:47.916032    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:06:47.934293    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:06:47.947871    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:06:47.952020    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:06:47.971701    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:06:47.986795    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:06:47.991166    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:06:48.008021    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:06:48.023761    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:06:48.029138    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:06:48.047659    1528 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:06:48.063995    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:06:48.141323    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:06:48.685789    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:06:48.933405    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:06:49.007626    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:06:49.088118    1528 api_server.go:52] waiting for apiserver process to appear ...
	I1212 20:06:49.091668    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:49.594772    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:50.093859    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:50.594422    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:51.093806    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:51.593915    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:52.093893    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:52.594038    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:53.093417    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:53.593495    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:54.093802    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:54.594146    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:55.095283    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:55.594629    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:56.094166    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:56.593508    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:57.093792    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:57.594191    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:58.094043    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:06:58.593447    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:48.594667    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:49.093256    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:07:49.126325    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.126325    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:07:49.130353    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:07:49.158022    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.158022    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:07:49.162811    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:07:49.190525    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.190525    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:07:49.194310    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:07:49.220030    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.220030    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:07:49.223677    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:07:49.249986    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.249986    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:07:49.253970    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:07:49.282441    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.282441    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:07:49.286057    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:07:49.315225    1528 logs.go:282] 0 containers: []
	W1212 20:07:49.315248    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:07:49.315306    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:07:49.315306    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:07:49.374436    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:07:49.374436    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:07:49.404204    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:07:49.404204    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:07:49.493575    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:07:49.481243   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.484599   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.485624   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.487104   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.488000   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:07:49.481243   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.484599   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.485624   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.487104   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:49.488000   23694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:07:49.493575    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:07:49.493575    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:07:49.537752    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:07:49.537752    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:07:52.109985    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:52.133820    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:07:52.164388    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.164388    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:07:52.168109    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:07:52.195605    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.195605    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:07:52.199164    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:07:52.229188    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.229188    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:07:52.232745    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:07:52.256990    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.256990    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:07:52.261539    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:07:52.290862    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.290862    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:07:52.294555    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:07:52.324957    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.324957    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:07:52.330284    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:07:52.359197    1528 logs.go:282] 0 containers: []
	W1212 20:07:52.359197    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:07:52.359197    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:07:52.359197    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:07:52.386524    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:07:52.386524    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:07:52.470690    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:07:52.461396   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.462458   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.463681   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.464521   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.466051   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:07:52.461396   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.462458   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.463681   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.464521   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:52.466051   23854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:07:52.470690    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:07:52.470690    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:07:52.511513    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:07:52.511513    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:07:52.560676    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:07:52.560676    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:07:55.127058    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:55.150663    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:07:55.181456    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.181456    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:07:55.184641    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:07:55.217269    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.217269    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:07:55.220911    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:07:55.250346    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.250346    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:07:55.254082    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:07:55.285676    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.285706    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:07:55.288968    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:07:55.315854    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.315854    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:07:55.319386    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:07:55.348937    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.348937    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:07:55.352894    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:07:55.380789    1528 logs.go:282] 0 containers: []
	W1212 20:07:55.380853    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:07:55.380853    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:07:55.380883    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:07:55.463944    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:07:55.453644   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.455074   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.457094   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.458003   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.461398   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:07:55.453644   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.455074   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.457094   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.458003   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:55.461398   23998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:07:55.463944    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:07:55.463944    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:07:55.507780    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:07:55.507780    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:07:55.561906    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:07:55.561906    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:07:55.623372    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:07:55.623372    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:07:58.160009    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:07:58.184039    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:07:58.215109    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.215109    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:07:58.218681    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:07:58.247778    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.247778    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:07:58.251301    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:07:58.278710    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.278710    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:07:58.282296    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:07:58.308953    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.308953    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:07:58.312174    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:07:58.339973    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.340049    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:07:58.343731    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:07:58.374943    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.374943    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:07:58.378660    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:07:58.405372    1528 logs.go:282] 0 containers: []
	W1212 20:07:58.405372    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:07:58.405372    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:07:58.405372    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:07:58.453718    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:07:58.453718    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:07:58.514502    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:07:58.514502    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:07:58.544394    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:07:58.544394    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:07:58.623232    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:07:58.613437   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.615268   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.617776   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.618728   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.619935   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:07:58.613437   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.615268   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.617776   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.618728   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:58.619935   24162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:07:58.623232    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:07:58.623232    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:01.169113    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:01.192583    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:01.222434    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.222434    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:01.225873    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:01.253020    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.253020    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:01.257395    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:01.286407    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.286407    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:01.290442    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:01.317408    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.317408    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:01.321138    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:01.348820    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.348820    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:01.352926    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:01.383541    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.383541    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:01.387373    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:01.415400    1528 logs.go:282] 0 containers: []
	W1212 20:08:01.415431    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:01.415431    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:01.415466    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:01.481183    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:01.481183    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:01.512132    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:01.512132    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:01.598560    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:01.586562   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.587448   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.590300   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.591224   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.593552   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:01.586562   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.587448   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.590300   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.591224   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:01.593552   24297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:01.598601    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:01.598601    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:01.641848    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:01.641848    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:04.202764    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:04.225393    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:04.257048    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.257048    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:04.261463    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:04.289329    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.289329    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:04.295911    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:04.324136    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.324205    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:04.329272    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:04.355941    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.355941    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:04.359744    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:04.389386    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.389461    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:04.393063    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:04.421465    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.421465    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:04.425377    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:04.454159    1528 logs.go:282] 0 containers: []
	W1212 20:08:04.454159    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:04.454185    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:04.454221    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:04.499238    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:04.499238    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:04.546668    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:04.546668    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:04.614181    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:04.614181    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:04.646155    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:04.646155    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:04.746527    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:04.735346   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.736502   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.738096   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.740089   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.741574   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:04.735346   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.736502   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.738096   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.740089   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:04.741574   24466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:07.252038    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:07.276838    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:07.307770    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.307770    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:07.311473    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:07.338086    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.338086    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:07.343809    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:07.373687    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.373687    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:07.377399    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:07.406083    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.406083    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:07.409835    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:07.437651    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.437651    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:07.441428    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:07.468369    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.468369    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:07.472164    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:07.503047    1528 logs.go:282] 0 containers: []
	W1212 20:08:07.503047    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:07.503047    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:07.503811    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:07.531856    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:07.531856    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:07.618451    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:07.605790   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.608413   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.611125   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.611805   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.614027   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:07.605790   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.608413   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.611125   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.611805   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:07.614027   24598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:07.618451    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:07.618451    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:07.661072    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:07.661072    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:07.708185    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:07.708185    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:10.277741    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:10.301882    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:10.334646    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.334646    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:10.338176    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:10.369543    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.369543    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:10.372853    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:10.405159    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.405159    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:10.408623    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:10.436491    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.436491    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:10.440653    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:10.471674    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.471674    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:10.475616    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:10.503923    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.503923    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:10.507960    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:10.532755    1528 logs.go:282] 0 containers: []
	W1212 20:08:10.532755    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:10.532755    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:10.532755    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:10.596502    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:10.596502    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:10.627352    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:10.627352    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:10.716582    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:10.705824   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.707137   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.709479   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.710611   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.712209   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:10.705824   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.707137   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.709479   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.710611   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:10.712209   24749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:10.716582    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:10.716582    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:10.758177    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:10.758177    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:13.312261    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:13.336629    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:13.366321    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.366321    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:13.370440    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:13.398643    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.398643    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:13.402381    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:13.432456    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.432481    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:13.436213    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:13.464635    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.464711    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:13.468308    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:13.495284    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.495284    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:13.499271    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:13.528325    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.528325    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:13.531787    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:13.562227    1528 logs.go:282] 0 containers: []
	W1212 20:08:13.562227    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:13.562227    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:13.562227    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:13.663593    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:13.651715   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.652789   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.654090   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.658182   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.659318   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:13.651715   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.652789   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.654090   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.658182   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:13.659318   24891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:13.663593    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:13.663593    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:13.704702    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:13.704702    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:13.753473    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:13.753473    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:13.816534    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:13.816534    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:16.353541    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:16.376390    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:16.407214    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.407214    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:16.410992    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:16.441225    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.441225    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:16.444710    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:16.474803    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.474803    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:16.478736    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:16.507490    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.507490    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:16.510890    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:16.542100    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.542196    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:16.546032    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:16.575799    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.575799    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:16.579959    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:16.607409    1528 logs.go:282] 0 containers: []
	W1212 20:08:16.607409    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:16.607409    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:16.607409    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:16.635159    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:16.635159    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:16.716319    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:16.704484   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.705339   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.707697   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.710003   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.712279   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:16.704484   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.705339   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.707697   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.710003   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:16.712279   25043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:16.716319    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:16.716319    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:16.759176    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:16.759176    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:16.808150    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:16.808180    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:19.374586    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:19.397466    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:19.428699    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.428699    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:19.432104    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:19.459357    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.459357    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:19.463506    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:19.492817    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.492862    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:19.496262    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:19.524604    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.524633    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:19.528245    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:19.554030    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.554030    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:19.557659    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:19.585449    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.585449    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:19.589270    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:19.617715    1528 logs.go:282] 0 containers: []
	W1212 20:08:19.617715    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:19.617715    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:19.617715    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:19.665679    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:19.665679    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:19.731378    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:19.731378    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:19.760660    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:19.760660    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:19.846488    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:19.835706   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.837312   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.839548   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.840426   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.842652   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:19.835706   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.837312   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.839548   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.840426   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:19.842652   25223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:19.846488    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:19.846534    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:22.396054    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:22.420446    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:22.451208    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.451246    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:22.455255    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:22.482900    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.482900    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:22.486411    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:22.515383    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.515383    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:22.518824    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:22.550034    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.550034    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:22.553623    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:22.581020    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.581020    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:22.585628    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:22.612869    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.612869    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:22.616928    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:22.644472    1528 logs.go:282] 0 containers: []
	W1212 20:08:22.644472    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:22.644472    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:22.644472    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:22.708075    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:22.708075    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:22.738243    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:22.738270    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:22.821664    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:22.811477   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:22.812684   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:22.813664   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:22.815983   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:22.818141   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:22.811477   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:22.812684   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:22.813664   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:22.815983   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:22.818141   25358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:22.821664    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:22.821664    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:22.864165    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:22.864165    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:25.420933    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:25.445913    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:25.482750    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.482780    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:25.486866    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:25.513327    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.513327    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:25.516888    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:25.544296    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.544296    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:25.547411    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:25.577831    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.577831    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:25.581764    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:25.611577    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.611577    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:25.614994    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:25.643683    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.643683    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:25.647543    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:25.673764    1528 logs.go:282] 0 containers: []
	W1212 20:08:25.673764    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:25.673764    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:25.673764    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:25.756845    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:25.747568   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:25.748735   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:25.749881   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:25.750867   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:25.753666   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:25.747568   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:25.748735   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:25.749881   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:25.750867   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:25.753666   25498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:25.756845    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:25.756845    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:25.796355    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:25.796355    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:25.848330    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:25.848330    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:25.908271    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:25.908271    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:28.444198    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:28.466730    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:28.495218    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.496317    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:28.499838    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:28.526946    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.526946    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:28.531098    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:28.558957    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.558957    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:28.563084    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:28.591401    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.591401    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:28.594622    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:28.621536    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.621536    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:28.625599    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:28.652819    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.652819    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:28.655938    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:28.684007    1528 logs.go:282] 0 containers: []
	W1212 20:08:28.684007    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:28.684049    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:28.684049    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:28.766993    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:28.757413   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:28.758465   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:28.762706   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:28.763681   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:28.764948   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:28.757413   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:28.758465   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:28.762706   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:28.763681   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:28.764948   25653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:28.766993    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:28.766993    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:28.808427    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:28.808427    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:28.854005    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:28.854005    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:28.915072    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:28.915072    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:31.448340    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:31.482817    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:31.516888    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.516948    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:31.520762    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:31.548829    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.548829    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:31.552634    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:31.580202    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.580202    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:31.583832    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:31.612644    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.612644    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:31.616408    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:31.641662    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.641662    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:31.645105    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:31.674858    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.674858    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:31.678481    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:31.708742    1528 logs.go:282] 0 containers: []
	W1212 20:08:31.708742    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:31.708742    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:31.708742    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:31.737537    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:31.737537    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:31.815915    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:31.804811   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.805789   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.806945   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.808415   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.809568   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:31.804811   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.805789   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.806945   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.808415   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:31.809568   25813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:31.815915    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:31.815915    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:31.855387    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:31.855387    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:31.902882    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:31.902882    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:34.468874    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:34.492525    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:34.524158    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.524158    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:34.528390    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:34.555356    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.555356    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:34.558734    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:34.589102    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.589171    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:34.592795    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:34.621829    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.621829    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:34.625204    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:34.653376    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.653376    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:34.657009    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:34.683738    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.683738    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:34.686742    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:34.714674    1528 logs.go:282] 0 containers: []
	W1212 20:08:34.714674    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:34.714674    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:34.714674    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:34.779026    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:34.779026    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:34.808978    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:34.808978    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:34.892063    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:34.879879   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.880859   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.883101   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.883949   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.886570   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:34.879879   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.880859   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.883101   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.883949   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:34.886570   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:34.892063    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:34.892063    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:34.931531    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:34.931531    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:37.485139    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:37.507669    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:37.539156    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.539156    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:37.543011    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:37.573040    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.573040    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:37.576524    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:37.606845    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.606845    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:37.610640    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:37.637362    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.637362    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:37.640345    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:37.667170    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.667203    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:37.670535    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:37.699517    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.699517    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:37.703317    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:37.728898    1528 logs.go:282] 0 containers: []
	W1212 20:08:37.728898    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:37.728898    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:37.728898    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:37.794369    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:37.794369    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:37.824287    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:37.824287    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:37.909344    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:37.898187   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.899265   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.900556   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.901945   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.904063   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:37.898187   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.899265   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.900556   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.901945   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:37.904063   26119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:37.909344    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:37.909344    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:37.954162    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:37.954162    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:40.506487    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:40.531085    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:40.562228    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.562228    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:40.566239    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:40.592782    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.592782    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:40.597032    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:40.623771    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.623771    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:40.627181    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:40.653272    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.653272    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:40.657007    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:40.684331    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.684331    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:40.687951    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:40.717873    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.718396    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:40.722742    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:40.750968    1528 logs.go:282] 0 containers: []
	W1212 20:08:40.750968    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:40.750968    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:40.750968    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:40.780652    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:40.780652    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:40.862566    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:40.851236   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.852305   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.854181   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.856807   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.857931   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:40.851236   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.852305   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.854181   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.856807   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:40.857931   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:40.862566    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:40.862566    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:40.901731    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:40.901731    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:40.950141    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:40.950141    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:43.517065    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:43.542117    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:43.570769    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.570769    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:43.574614    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:43.606209    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.606209    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:43.610144    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:43.636742    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.636742    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:43.640713    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:43.671147    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.671166    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:43.675284    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:43.702707    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.702707    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:43.709331    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:43.739560    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.739560    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:43.743495    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:43.773460    1528 logs.go:282] 0 containers: []
	W1212 20:08:43.773460    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:43.773460    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:43.773460    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:43.839426    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:43.839426    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:43.869067    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:43.869067    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:43.956418    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:43.945790   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.947881   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.949529   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.951167   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.952658   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:43.945790   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.947881   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.949529   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.951167   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:43.952658   26421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:43.956418    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:43.956418    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:43.999225    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:43.999225    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:46.559969    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:46.583306    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:46.616304    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.616304    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:46.620185    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:46.649980    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.649980    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:46.653901    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:46.679706    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.679706    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:46.683349    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:46.709377    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.709377    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:46.713435    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:46.743714    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.743714    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:46.747353    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:46.774831    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.774831    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:46.778444    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:46.803849    1528 logs.go:282] 0 containers: []
	W1212 20:08:46.803849    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:46.803849    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:46.803849    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:46.846976    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:46.846976    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:46.898873    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:46.898873    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:46.960800    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:46.960800    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:46.992131    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:46.992131    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:47.078211    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:47.065571   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.069068   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.070187   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.071256   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.072132   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:47.065571   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.069068   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.070187   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.071256   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:47.072132   26586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:49.584391    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:49.609888    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:49.644530    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.644530    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:49.648078    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:49.676237    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.676237    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:49.680633    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:49.711496    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.711496    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:49.714503    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:49.741598    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.741598    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:49.746023    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:49.774073    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.774073    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:49.780499    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:49.807422    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.807422    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:49.811492    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:49.837105    1528 logs.go:282] 0 containers: []
	W1212 20:08:49.837105    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:49.837105    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:49.837105    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:49.919888    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:49.910085   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.911433   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.912870   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.913900   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.914973   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:49.910085   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.911433   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.912870   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.913900   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:49.914973   26714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:49.919888    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:49.919888    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:49.961375    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:49.961375    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:50.029040    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:50.029040    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:50.091715    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:50.091715    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:52.626760    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:52.650138    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:52.682125    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.682125    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:52.685499    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:52.716677    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.716677    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:52.720251    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:52.750215    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.750215    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:52.753203    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:52.783410    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.783410    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:52.786745    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:52.816028    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.816028    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:52.819028    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:52.847808    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.847808    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:52.851676    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:52.880388    1528 logs.go:282] 0 containers: []
	W1212 20:08:52.880388    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:52.880388    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:52.880388    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:52.927060    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:52.927060    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:52.980540    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:52.980540    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:53.040013    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:53.040013    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:53.068682    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:53.068682    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:53.153542    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:53.142301   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.143114   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.146316   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.148645   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.149541   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:53.142301   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.143114   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.146316   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.148645   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:53.149541   26893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:55.659454    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:55.682885    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:55.711696    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.711696    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:55.718399    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:55.746229    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.746229    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:55.750441    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:55.780178    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.780210    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:55.784012    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:55.811985    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.811985    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:55.816792    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:55.847996    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.847996    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:55.851745    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:55.883521    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.883521    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:55.886915    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:55.914853    1528 logs.go:282] 0 containers: []
	W1212 20:08:55.914853    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:55.914853    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:55.914853    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:55.960920    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:55.960920    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:56.026011    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:56.026011    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:56.053113    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:56.053113    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:56.136578    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:56.126520   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.127364   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.130322   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.131490   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.132838   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:56.126520   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.127364   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.130322   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.131490   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:56.132838   27039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:56.136578    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:56.136578    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:08:58.683199    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:08:58.705404    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:08:58.735584    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.735584    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:08:58.739795    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:08:58.770569    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.770569    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:08:58.774526    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:08:58.804440    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.804440    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:08:58.808498    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:08:58.836009    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.836009    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:08:58.840208    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:08:58.869192    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.869192    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:58.872945    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:08:58.902237    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.902237    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:08:58.905993    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:08:58.933450    1528 logs.go:282] 0 containers: []
	W1212 20:08:58.933617    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:58.933617    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:08:58.933617    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:58.976315    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:58.976391    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:59.038199    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:59.038199    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:59.068976    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:59.068976    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:59.160516    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:08:59.150031   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.151396   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.152924   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.154293   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.156675   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:08:59.150031   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.151396   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.152924   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.154293   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:08:59.156675   27187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:59.160516    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:08:59.160516    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:01.709859    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:01.733860    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:01.762957    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.762957    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:01.766889    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:01.793351    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.793351    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:01.797156    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:01.823801    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.823801    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:01.827545    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:01.858811    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.858811    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:01.862667    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:01.888526    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.888601    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:01.892330    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:01.921800    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.921834    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:01.925710    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:01.954630    1528 logs.go:282] 0 containers: []
	W1212 20:09:01.954630    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:01.954630    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:01.954630    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:02.019929    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:02.019929    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:02.050304    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:02.050304    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:02.137016    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:02.125446   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.126668   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.127557   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.130278   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.131874   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:02.125446   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.126668   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.127557   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.130278   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:02.131874   27323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:02.137016    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:02.137016    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:02.181380    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:02.181380    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:04.738393    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:04.761261    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:04.788560    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.788594    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:04.792550    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:04.822339    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.822339    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:04.826135    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:04.854461    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.854531    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:04.858147    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:04.886243    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.886243    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:04.890144    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:04.918123    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.918123    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:04.922152    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:04.949493    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.949557    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:04.953111    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:04.980390    1528 logs.go:282] 0 containers: []
	W1212 20:09:04.980390    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:04.980390    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:04.980390    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:05.043888    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:05.043888    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:05.075474    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:05.075474    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:05.156773    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:05.145508   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.147081   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.148087   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.150297   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.151035   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:05.145508   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.147081   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.148087   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.150297   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:05.151035   27474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:05.156773    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:05.156773    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:05.198847    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:05.198847    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:07.752600    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:07.774442    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:07.801273    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.801315    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:07.804806    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:07.833315    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.833315    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:07.837119    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:07.866393    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.866417    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:07.869980    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:07.898480    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.898480    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:07.902426    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:07.929231    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.929231    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:07.932443    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:07.962786    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.962786    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:07.966343    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:07.993681    1528 logs.go:282] 0 containers: []
	W1212 20:09:07.993681    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:07.993681    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:07.993681    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:08.075996    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:08.065379   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.066297   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.068979   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.070479   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.071802   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:08.065379   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.066297   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.068979   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.070479   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:08.071802   27621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:08.075996    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:08.075996    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:08.115751    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:08.115751    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:08.167959    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:08.167959    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:08.229990    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:08.229990    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:10.765802    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:10.787970    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:10.817520    1528 logs.go:282] 0 containers: []
	W1212 20:09:10.817520    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:10.821188    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:10.850905    1528 logs.go:282] 0 containers: []
	W1212 20:09:10.850905    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:10.854741    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:10.882098    1528 logs.go:282] 0 containers: []
	W1212 20:09:10.882098    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:10.885759    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:10.915908    1528 logs.go:282] 0 containers: []
	W1212 20:09:10.915931    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:10.919484    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:10.947704    1528 logs.go:282] 0 containers: []
	W1212 20:09:10.947704    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:10.951840    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:10.979998    1528 logs.go:282] 0 containers: []
	W1212 20:09:10.979998    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:10.983440    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:11.012620    1528 logs.go:282] 0 containers: []
	W1212 20:09:11.012620    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:11.012620    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:11.012620    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:11.075910    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:11.075910    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:11.105013    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:11.105013    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:11.184242    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:11.174258   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.175183   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.177449   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.178930   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.180215   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:11.174258   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.175183   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.177449   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.178930   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:11.180215   27780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:11.184242    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:11.184242    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:11.228072    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:11.228072    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:13.782352    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:13.806071    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:13.835380    1528 logs.go:282] 0 containers: []
	W1212 20:09:13.835380    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:13.839913    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:13.866644    1528 logs.go:282] 0 containers: []
	W1212 20:09:13.866644    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:13.870648    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:13.900617    1528 logs.go:282] 0 containers: []
	W1212 20:09:13.900687    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:13.904431    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:13.928026    1528 logs.go:282] 0 containers: []
	W1212 20:09:13.928026    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:13.931830    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:13.961813    1528 logs.go:282] 0 containers: []
	W1212 20:09:13.961813    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:13.965790    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:13.993658    1528 logs.go:282] 0 containers: []
	W1212 20:09:13.993658    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:13.997303    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:14.025708    1528 logs.go:282] 0 containers: []
	W1212 20:09:14.025708    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:14.025708    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:14.025708    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:14.106478    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:14.097472   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.098766   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.100198   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.101259   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.102326   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:14.097472   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.098766   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.100198   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.101259   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:14.102326   27922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:14.106478    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:14.106478    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:14.148128    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:14.148128    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:14.203808    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:14.203885    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:14.267083    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:14.267083    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:16.803844    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:16.828076    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:16.857370    1528 logs.go:282] 0 containers: []
	W1212 20:09:16.857370    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:16.861602    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:16.888928    1528 logs.go:282] 0 containers: []
	W1212 20:09:16.888928    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:16.892594    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:16.918950    1528 logs.go:282] 0 containers: []
	W1212 20:09:16.918950    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:16.922184    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:16.949697    1528 logs.go:282] 0 containers: []
	W1212 20:09:16.949697    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:16.953615    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:16.980582    1528 logs.go:282] 0 containers: []
	W1212 20:09:16.980582    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:16.984239    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:17.011537    1528 logs.go:282] 0 containers: []
	W1212 20:09:17.011537    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:17.015236    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:17.044025    1528 logs.go:282] 0 containers: []
	W1212 20:09:17.044025    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:17.044059    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:17.044059    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:17.108593    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:17.108593    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:17.140984    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:17.140984    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:17.223600    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:17.212237   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.213454   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.216019   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.218047   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.219305   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:17.212237   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.213454   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.216019   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.218047   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:17.219305   28078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:17.223647    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:17.223647    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:17.265808    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:17.265808    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:19.827665    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:19.848754    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:19.880440    1528 logs.go:282] 0 containers: []
	W1212 20:09:19.880440    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:19.884631    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:19.911688    1528 logs.go:282] 0 containers: []
	W1212 20:09:19.911688    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:19.915503    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:19.942894    1528 logs.go:282] 0 containers: []
	W1212 20:09:19.942894    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:19.946623    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:19.974622    1528 logs.go:282] 0 containers: []
	W1212 20:09:19.974622    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:19.978983    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:20.005201    1528 logs.go:282] 0 containers: []
	W1212 20:09:20.005201    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:20.009244    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:20.040298    1528 logs.go:282] 0 containers: []
	W1212 20:09:20.040298    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:20.043935    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:20.073267    1528 logs.go:282] 0 containers: []
	W1212 20:09:20.073267    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:20.073267    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:20.073267    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:20.139351    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:20.139351    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:20.170692    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:20.170692    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:20.255758    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:20.244828   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.245691   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.248701   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.249629   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.251916   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:20.244828   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.245691   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.248701   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.249629   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:20.251916   28227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:20.255758    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:20.255758    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:20.296082    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:20.296082    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:22.852656    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:22.877113    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:22.907531    1528 logs.go:282] 0 containers: []
	W1212 20:09:22.907601    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:22.911006    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:22.938103    1528 logs.go:282] 0 containers: []
	W1212 20:09:22.938103    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:22.941741    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:22.969757    1528 logs.go:282] 0 containers: []
	W1212 20:09:22.969757    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:22.973641    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:23.003718    1528 logs.go:282] 0 containers: []
	W1212 20:09:23.003718    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:23.007427    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:23.034105    1528 logs.go:282] 0 containers: []
	W1212 20:09:23.034105    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:23.038551    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:23.068440    1528 logs.go:282] 0 containers: []
	W1212 20:09:23.068440    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:23.072250    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:23.099797    1528 logs.go:282] 0 containers: []
	W1212 20:09:23.099797    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:23.099797    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:23.099797    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:23.127441    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:23.127441    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:23.213420    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:23.200013   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.205001   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.207092   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.207990   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.210181   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:23.200013   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.205001   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.207092   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.207990   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:23.210181   28374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:23.213420    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:23.213420    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:23.258155    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:23.258155    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:23.304413    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:23.304413    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:25.871188    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:25.894216    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:25.924994    1528 logs.go:282] 0 containers: []
	W1212 20:09:25.924994    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:25.928893    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:25.956143    1528 logs.go:282] 0 containers: []
	W1212 20:09:25.956143    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:25.961174    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:25.988898    1528 logs.go:282] 0 containers: []
	W1212 20:09:25.988898    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:25.993364    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:26.021169    1528 logs.go:282] 0 containers: []
	W1212 20:09:26.021233    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:26.024829    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:26.051922    1528 logs.go:282] 0 containers: []
	W1212 20:09:26.051922    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:26.055062    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:26.082542    1528 logs.go:282] 0 containers: []
	W1212 20:09:26.082542    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:26.086788    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:26.117355    1528 logs.go:282] 0 containers: []
	W1212 20:09:26.117355    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:26.117355    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:26.117355    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:26.180352    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:26.180352    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:26.211105    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:26.211105    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:26.296971    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:26.286964   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.287943   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.291491   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.292547   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.294617   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:26.286964   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.287943   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.291491   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.292547   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:26.294617   28526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:26.296971    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:26.296971    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:26.338711    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:26.338711    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:28.896860    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:28.920643    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:28.950389    1528 logs.go:282] 0 containers: []
	W1212 20:09:28.950389    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:28.955391    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:28.982117    1528 logs.go:282] 0 containers: []
	W1212 20:09:28.982117    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:28.986142    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:29.015662    1528 logs.go:282] 0 containers: []
	W1212 20:09:29.015662    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:29.019455    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:29.049660    1528 logs.go:282] 0 containers: []
	W1212 20:09:29.049660    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:29.053631    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:29.081889    1528 logs.go:282] 0 containers: []
	W1212 20:09:29.081889    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:29.086411    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:29.114138    1528 logs.go:282] 0 containers: []
	W1212 20:09:29.114138    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:29.119659    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:29.150078    1528 logs.go:282] 0 containers: []
	W1212 20:09:29.150078    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:29.150078    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:29.150078    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:29.214085    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:29.214085    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:29.248111    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:29.248111    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:29.331531    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:29.323301   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.324246   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.326232   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.327289   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.328469   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:29.323301   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.324246   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.326232   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.327289   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:29.328469   28672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:29.331531    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:29.331573    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:29.371475    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:29.371475    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:31.925581    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:31.948416    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:31.979393    1528 logs.go:282] 0 containers: []
	W1212 20:09:31.979436    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:31.982941    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:32.012671    1528 logs.go:282] 0 containers: []
	W1212 20:09:32.012745    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:32.016490    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:32.044571    1528 logs.go:282] 0 containers: []
	W1212 20:09:32.044571    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:32.049959    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:32.077737    1528 logs.go:282] 0 containers: []
	W1212 20:09:32.077737    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:32.082023    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:32.112680    1528 logs.go:282] 0 containers: []
	W1212 20:09:32.112680    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:32.116732    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:32.144079    1528 logs.go:282] 0 containers: []
	W1212 20:09:32.144079    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:32.147365    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:32.175674    1528 logs.go:282] 0 containers: []
	W1212 20:09:32.175674    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:32.175674    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:32.175674    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:32.238433    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:32.238433    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:32.268680    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:32.268680    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:32.350924    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:32.339559   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.340729   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.342082   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.343375   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.344861   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:32.339559   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.340729   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.342082   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.343375   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:32.344861   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:32.351446    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:32.351446    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:32.393409    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:32.393409    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:34.949675    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:34.974371    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:35.003673    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.003673    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:35.007894    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:35.036794    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.036794    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:35.040718    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:35.068827    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.068827    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:35.073552    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:35.101505    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.101505    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:35.105374    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:35.132637    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.132637    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:35.135977    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:35.164108    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.164108    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:35.168327    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:35.196237    1528 logs.go:282] 0 containers: []
	W1212 20:09:35.196237    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:35.196237    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:35.196237    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:35.225096    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:35.225096    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:35.310720    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:35.296566   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.297390   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.300154   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.302418   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.302966   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:35.296566   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.297390   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.300154   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.302418   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:35.302966   28973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:35.310720    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:35.310720    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:35.352640    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:35.352640    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:35.405163    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:35.405684    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:37.970126    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:37.993740    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:38.021567    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.021567    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:38.025733    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:38.054259    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.054259    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:38.058230    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:38.091609    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.091609    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:38.094726    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:38.121402    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.121402    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:38.124780    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:38.156230    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.156230    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:38.159968    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:38.187111    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.187111    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:38.191000    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:38.219114    1528 logs.go:282] 0 containers: []
	W1212 20:09:38.219114    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:38.219114    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:38.219163    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:38.267592    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:38.267642    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:38.332291    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:38.332291    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:38.362654    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:38.362654    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:38.450249    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:38.438367   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.439369   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.441361   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.442677   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.443575   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:38.438367   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.439369   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.441361   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.442677   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:38.443575   29138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:38.450249    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:38.450249    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:41.000122    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:41.025061    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:41.056453    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.056453    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:41.060356    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:41.090046    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.090046    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:41.096769    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:41.124375    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.124375    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:41.128276    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:41.155835    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.155835    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:41.159800    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:41.188748    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.188748    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:41.193110    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:41.220152    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.220152    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:41.224010    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:41.252532    1528 logs.go:282] 0 containers: []
	W1212 20:09:41.252532    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:41.252532    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:41.252532    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:41.316983    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:41.316983    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:41.347558    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:41.347558    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:41.428225    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:41.416671   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.417861   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.419148   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.420206   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.420911   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:41.416671   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.417861   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.419148   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.420206   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:41.420911   29277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:41.428225    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:41.428225    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:41.470919    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:41.470919    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:44.030446    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:44.055047    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:44.084459    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.084459    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:44.088206    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:44.117052    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.117052    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:44.120537    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:44.147556    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.147556    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:44.152098    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:44.180075    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.180075    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:44.183790    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:44.210767    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.210767    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:44.214367    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:44.240217    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.240217    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:44.244696    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:44.273318    1528 logs.go:282] 0 containers: []
	W1212 20:09:44.273318    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:44.273318    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:44.273371    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:44.339517    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:44.339517    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:44.369771    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:44.369771    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:44.450064    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:44.438924   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.439839   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.441693   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.442539   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.445500   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:44.438924   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.439839   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.441693   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.442539   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:44.445500   29423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:44.450064    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:44.450064    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:44.493504    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:44.493504    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:47.062950    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:47.087994    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:47.118381    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.118409    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:47.121556    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:47.150429    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.150429    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:47.154790    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:47.182604    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.182604    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:47.186262    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:47.213354    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.213354    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:47.217174    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:47.246442    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.246442    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:47.251292    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:47.280336    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.280336    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:47.283865    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:47.311245    1528 logs.go:282] 0 containers: []
	W1212 20:09:47.311323    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:47.311323    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:47.311323    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:47.374063    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:47.374063    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:47.404257    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:47.404257    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:47.493784    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:47.483027   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.484386   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.488247   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.489212   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.490430   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:47.483027   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.484386   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.488247   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.489212   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:47.490430   29572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:47.493784    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:47.493784    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:47.546267    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:47.546267    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:50.104321    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:50.126581    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:50.155564    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.155564    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:50.160428    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:50.189268    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.189268    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:50.192916    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:50.218955    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.218955    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:50.222686    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:50.249342    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.249342    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:50.253397    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:50.283028    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.283028    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:50.286951    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:50.325979    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.325979    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:50.329622    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:50.358362    1528 logs.go:282] 0 containers: []
	W1212 20:09:50.358362    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:50.358362    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:50.358362    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:50.422488    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:50.422488    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:50.452652    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:50.452652    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:50.550551    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:50.541153   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.542372   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.543369   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.544652   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.545772   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:50.541153   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.542372   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.543369   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.544652   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:50.545772   29735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:50.550602    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:50.550602    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:50.590552    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:50.590552    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:53.158722    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:53.182259    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:53.211903    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.211903    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:53.215402    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:53.243958    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.243958    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:53.247562    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:53.275751    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.275751    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:53.279763    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:53.306836    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.306836    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:53.310872    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:53.337813    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.337813    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:53.341633    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:53.371291    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.371291    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:53.374974    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:53.401726    1528 logs.go:282] 0 containers: []
	W1212 20:09:53.401726    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:53.401726    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:53.401726    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:53.484480    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:53.475528   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.476720   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.477955   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.479240   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.480197   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:53.475528   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.476720   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.477955   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.479240   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:53.480197   29878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:53.484480    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:53.484480    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:53.548050    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:53.548050    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:53.599287    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:53.599439    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:53.660624    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:53.660624    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:56.196823    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:56.221135    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:56.250407    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.250407    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:56.254016    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:56.285901    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.285901    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:56.290067    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:56.318341    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.318341    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:56.321789    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:56.352739    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.352739    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:56.356470    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:56.384106    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.384106    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:56.388211    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:56.415890    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.415890    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:56.420087    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:56.447932    1528 logs.go:282] 0 containers: []
	W1212 20:09:56.447932    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:56.447932    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:56.447932    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:56.477708    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:56.477708    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:56.588387    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:56.579332   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.580404   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.581153   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.583503   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.584370   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:56.579332   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.580404   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.581153   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.583503   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:56.584370   30049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:56.588387    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:56.588387    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:56.628140    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:56.629024    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:56.673720    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:56.673720    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:59.242052    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:59.264739    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:09:59.293601    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.293601    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:09:59.297772    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:09:59.324701    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.324701    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:09:59.328642    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:09:59.358373    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.358373    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:09:59.362425    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:09:59.392638    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.392638    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:09:59.396206    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:09:59.423777    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.423777    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:59.427998    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:09:59.455368    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.455368    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:09:59.460647    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:09:59.488029    1528 logs.go:282] 0 containers: []
	W1212 20:09:59.488029    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:59.488029    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:59.488029    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:59.548806    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:59.548806    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:59.580620    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:59.580620    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:59.670291    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:59.659775   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.660719   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.663685   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.664723   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.665878   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:09:59.659775   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.660719   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.663685   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.664723   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:59.665878   30202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:59.670291    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:09:59.670291    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:09:59.715000    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:09:59.715000    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:02.271675    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:02.295613    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:02.328792    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.328792    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:02.332483    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:02.364136    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.364136    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:02.368415    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:02.396018    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.396018    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:02.399987    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:02.426946    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.426946    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:02.430641    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:02.457307    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.457307    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:02.461639    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:02.490776    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.490776    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:02.495011    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:02.535030    1528 logs.go:282] 0 containers: []
	W1212 20:10:02.535030    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:02.535030    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:02.535030    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:02.598020    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:02.598020    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:02.627885    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:02.627885    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:02.704890    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:02.692184   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.693898   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.695260   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.696802   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.698053   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:02.692184   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.693898   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.695260   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.696802   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:02.698053   30352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:02.704939    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:02.704939    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:02.743781    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:02.743781    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:05.296529    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:05.320338    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:05.350975    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.350975    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:05.354341    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:05.384954    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.384954    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:05.389226    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:05.416593    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.416663    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:05.420370    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:05.448275    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.448306    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:05.451950    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:05.489214    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.489214    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:05.492826    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:05.542815    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.542815    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:05.546994    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:05.577967    1528 logs.go:282] 0 containers: []
	W1212 20:10:05.577967    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:05.577967    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:05.577967    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:05.666752    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:05.655586   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.656570   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.657861   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.659073   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.660173   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:05.655586   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.656570   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.657861   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.659073   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:05.660173   30499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:05.666752    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:05.666752    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:05.710699    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:05.710699    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:05.761552    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:05.761552    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:05.824698    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:05.824698    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:08.358868    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:08.384185    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:08.414077    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.414077    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:08.417802    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:08.449585    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.449585    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:08.453707    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:08.481690    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.481690    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:08.485802    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:08.526849    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.526849    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:08.530588    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:08.561211    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.561211    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:08.565127    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:08.592694    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.592781    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:08.596577    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:08.625262    1528 logs.go:282] 0 containers: []
	W1212 20:10:08.625262    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:08.625262    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:08.625335    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:08.685169    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:08.685169    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:08.715897    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:08.715897    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:08.803701    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:08.791784   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.793050   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.794221   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.795525   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.797359   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:08.791784   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.793050   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.794221   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.795525   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:08.797359   30653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:08.803701    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:08.803701    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:08.843054    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:08.843054    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:11.399600    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:11.423207    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:11.452824    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.452824    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:11.456632    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:11.485718    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.485718    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:11.489975    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:11.516373    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.516442    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:11.520086    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:11.550008    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.550008    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:11.553479    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:11.582422    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.582422    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:11.586067    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:11.614204    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.614204    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:11.617891    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:11.647117    1528 logs.go:282] 0 containers: []
	W1212 20:10:11.647117    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:11.647117    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:11.647117    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:11.708885    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:11.708885    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:11.738490    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:11.738490    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:11.827046    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:11.816517   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.817635   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.818643   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.819970   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.821231   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:11.816517   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.817635   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.818643   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.819970   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:11.821231   30804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:11.827046    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:11.827107    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:11.866493    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:11.866493    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:14.418219    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:14.441326    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:14.471617    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.471617    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:14.475764    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:14.525977    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.525977    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:14.530095    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:14.559065    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.559065    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:14.562300    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:14.591222    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.591222    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:14.595004    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:14.623409    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.623409    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:14.626892    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:14.654709    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.654709    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:14.658517    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:14.685033    1528 logs.go:282] 0 containers: []
	W1212 20:10:14.685033    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:14.685033    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:14.685033    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:14.729797    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:14.729797    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:14.775571    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:14.775571    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:14.837326    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:14.837326    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:14.868773    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:14.868773    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:14.947701    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:14.936523   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.939018   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.940394   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.941385   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.944155   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:14.936523   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.939018   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.940394   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.941385   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:14.944155   30974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:17.453450    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:17.476221    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:17.508293    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.508388    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:17.512181    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:17.543844    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.543844    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:17.547662    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:17.575201    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.575201    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:17.578822    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:17.606210    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.606210    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:17.609909    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:17.635671    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.635671    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:17.639317    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:17.668567    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.668567    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:17.671701    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:17.698754    1528 logs.go:282] 0 containers: []
	W1212 20:10:17.698754    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:17.698754    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:17.698835    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:17.746368    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:17.746368    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:17.807375    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:17.807375    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:17.838385    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:17.838385    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:17.926603    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:17.913747   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.914551   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.916660   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.920134   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.920887   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:17.913747   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.914551   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.916660   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.920134   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:17.920887   31123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:17.926603    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:17.926648    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:20.475641    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:20.498334    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:20.527197    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.527197    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:20.530922    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:20.557934    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.557934    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:20.561696    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:20.589458    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.589458    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:20.593618    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:20.618953    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.619013    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:20.622779    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:20.650087    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.650087    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:20.653349    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:20.680898    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.680898    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:20.684841    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:20.711841    1528 logs.go:282] 0 containers: []
	W1212 20:10:20.711841    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:20.711841    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:20.711841    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:20.773325    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:20.773325    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:20.802932    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:20.802932    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:20.882468    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:20.873604   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.874733   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.875947   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.877488   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.878762   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:20.873604   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.874733   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.875947   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.877488   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:20.878762   31260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:20.882468    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:20.882468    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:20.924918    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:20.924918    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:23.483925    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:23.503925    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:23.531502    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.531502    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:23.535209    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:23.566493    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.566493    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:23.569915    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:23.598869    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.598869    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:23.603128    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:23.629658    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.629658    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:23.633104    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:23.659718    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.659718    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:23.663327    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:23.693156    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.693156    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:23.696530    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:23.727025    1528 logs.go:282] 0 containers: []
	W1212 20:10:23.727025    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:23.727025    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:23.727025    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:23.788970    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:23.788970    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:23.819732    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:23.819732    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:23.903797    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:23.893226   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.894440   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.895734   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.897141   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.899253   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:23.893226   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.894440   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.895734   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.897141   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:23.899253   31407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:23.903797    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:23.903797    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:23.943716    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:23.943716    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:26.496986    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:26.519387    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:26.546439    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.546439    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:26.550311    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:26.579658    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.579658    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:26.583767    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:26.611690    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.611690    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:26.616096    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:26.642773    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.642773    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:26.646291    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:26.674086    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.674086    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:26.677423    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:26.705896    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.705896    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:26.709747    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:26.736563    1528 logs.go:282] 0 containers: []
	W1212 20:10:26.736563    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:26.736563    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:26.736563    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:26.797921    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:26.797921    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:26.827915    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:26.827915    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:26.912180    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:26.902978   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.904059   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.905126   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.906124   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.907093   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:26.902978   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.904059   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.905126   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.906124   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:26.907093   31556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:26.912180    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:26.912180    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:26.952784    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:26.952784    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:29.506291    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:29.528153    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:29.558126    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.558126    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:29.562358    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:29.592320    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.592320    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:29.596049    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:29.628556    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.628556    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:29.632809    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:29.657311    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.657311    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:29.661781    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:29.690232    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.690261    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:29.693735    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:29.722288    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.722288    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:29.725599    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:29.757022    1528 logs.go:282] 0 containers: []
	W1212 20:10:29.757022    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:29.757057    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:29.757057    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:29.838684    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:29.829268   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.831893   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.834799   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.835771   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.837035   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:29.829268   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.831893   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.834799   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.835771   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:29.837035   31695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:29.838684    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:29.840075    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:29.881968    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:29.881968    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:29.937264    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:29.937264    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:30.003954    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:30.003954    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:32.543156    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:32.567379    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:32.595089    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.595089    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:32.599147    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:32.627893    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.627962    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:32.631484    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:32.658969    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.658969    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:32.662719    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:32.689837    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.689837    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:32.693526    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:32.719931    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.719931    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:32.723427    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:32.754044    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.754044    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:32.757365    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:32.785242    1528 logs.go:282] 0 containers: []
	W1212 20:10:32.785242    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:32.785242    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:32.785242    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:32.866344    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:32.854769   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.856146   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.858548   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.859713   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:32.861545   31841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 20:10:32.866344    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:32.866344    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:32.910000    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:32.910000    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:32.959713    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:32.959713    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:33.023739    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:33.023739    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:35.563488    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:35.587848    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:35.619497    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.619497    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:35.625107    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:35.653936    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.653936    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:35.657619    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:35.684524    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.684524    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:35.687685    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:35.718759    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.718759    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:35.722575    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:35.749655    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.749655    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:35.753297    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:35.780974    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.780974    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:35.784685    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:35.810182    1528 logs.go:282] 0 containers: []
	W1212 20:10:35.810182    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:35.810182    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:35.810182    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:35.892605    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:35.881515   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.884461   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.886394   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.887420   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:35.888495   31989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 20:10:35.892605    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:35.892605    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:35.932890    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:35.932890    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:35.985679    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:35.985679    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:36.046361    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:36.046361    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:38.583800    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:38.606814    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:38.638211    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.638211    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:38.642266    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:38.669848    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.669848    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:38.673886    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:38.700984    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.700984    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:38.705078    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:38.729910    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.729910    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:38.733986    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:38.760705    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.760705    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:38.765121    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:38.799915    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.799915    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:38.804009    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:38.833364    1528 logs.go:282] 0 containers: []
	W1212 20:10:38.833364    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:38.833364    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:38.833364    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:38.913728    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:38.904474   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.905728   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.906533   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.908851   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:38.910188   32140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 20:10:38.914694    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:38.914694    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:38.953812    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:38.953812    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:38.999712    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:38.999712    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:39.060789    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:39.060789    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:41.597593    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:41.620430    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:41.650082    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.650082    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:41.653991    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:41.681237    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.681306    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:41.684963    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:41.713795    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.713795    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:41.719712    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:41.749037    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.749037    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:41.753070    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:41.779427    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.779427    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:41.783501    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:41.815751    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.815751    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:41.819560    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:41.847881    1528 logs.go:282] 0 containers: []
	W1212 20:10:41.847881    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:41.847881    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:41.847931    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:41.927320    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:41.917717   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.918601   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.921188   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.923369   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:41.924481   32287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 20:10:41.927320    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:41.927320    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:41.970940    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:41.970940    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:42.027555    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:42.027555    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:42.089451    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:42.089451    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:44.625751    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:44.648990    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:44.676551    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.676585    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:44.679722    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:44.709172    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.709172    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:44.713304    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:44.743046    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.743046    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:44.748526    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:44.778521    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.778521    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:44.782734    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:44.814603    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.814603    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:44.817683    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:44.845948    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.845948    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:44.849265    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:44.879812    1528 logs.go:282] 0 containers: []
	W1212 20:10:44.879812    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:44.879812    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:44.879812    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:44.944127    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:44.944127    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:44.974113    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:44.974113    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:45.057102    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:45.045651   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.047975   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.050361   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.051485   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:45.052325   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 20:10:45.057102    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:45.057102    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:45.100139    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:45.100139    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:47.652183    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:47.675849    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 20:10:47.706239    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.706239    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:10:47.709475    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 20:10:47.741233    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.741233    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:10:47.744861    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 20:10:47.774055    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.774055    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:10:47.777505    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 20:10:47.805794    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.805794    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:10:47.808964    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 20:10:47.836392    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.836392    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:10:47.841779    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 20:10:47.870715    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.870715    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:10:47.874288    1528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 20:10:47.901831    1528 logs.go:282] 0 containers: []
	W1212 20:10:47.901831    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:10:47.901831    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:10:47.901831    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 20:10:47.944346    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:10:47.944346    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:10:47.988778    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:10:47.988778    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:10:48.052537    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:10:48.052537    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:10:48.083339    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:10:48.083339    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:10:48.169498    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:48.160314   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.161274   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.162381   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.163778   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.164689   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:10:48.160314   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.161274   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.162381   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.163778   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:48.164689   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:10:50.675888    1528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:10:50.695141    1528 kubeadm.go:602] duration metric: took 4m2.9691176s to restartPrimaryControlPlane
	W1212 20:10:50.695255    1528 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 20:10:50.699541    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1212 20:10:51.173784    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:10:51.196593    1528 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:10:51.210961    1528 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:10:51.215040    1528 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:10:51.228862    1528 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:10:51.228862    1528 kubeadm.go:158] found existing configuration files:
	
	I1212 20:10:51.232787    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:10:51.246730    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:10:51.251357    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:10:51.268580    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:10:51.283713    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:10:51.288367    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:10:51.308779    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:10:51.322868    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:10:51.327510    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:10:51.347243    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:10:51.360015    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:10:51.365274    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:10:51.383196    1528 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:10:51.503494    1528 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1212 20:10:51.590365    1528 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 20:10:51.685851    1528 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:14:52.890657    1528 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 20:14:52.890657    1528 kubeadm.go:319] 
	I1212 20:14:52.891189    1528 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 20:14:52.897133    1528 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 20:14:52.897133    1528 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:14:52.897133    1528 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:14:52.897133    1528 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 20:14:52.897133    1528 kubeadm.go:319] CONFIG_INET: enabled
	I1212 20:14:52.898464    1528 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 20:14:52.898582    1528 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 20:14:52.898779    1528 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 20:14:52.898920    1528 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 20:14:52.899045    1528 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 20:14:52.899131    1528 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 20:14:52.899262    1528 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 20:14:52.899432    1528 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 20:14:52.899517    1528 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 20:14:52.899644    1528 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 20:14:52.899729    1528 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 20:14:52.899847    1528 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 20:14:52.900038    1528 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 20:14:52.900217    1528 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 20:14:52.900390    1528 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 20:14:52.900502    1528 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 20:14:52.900574    1528 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 20:14:52.900710    1528 kubeadm.go:319] OS: Linux
	I1212 20:14:52.900833    1528 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:14:52.900915    1528 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:14:52.900953    1528 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 20:14:52.901708    1528 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:14:52.901818    1528 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:14:52.901818    1528 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:14:52.901818    1528 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:14:52.906810    1528 out.go:252]   - Generating certificates and keys ...
	I1212 20:14:52.906810    1528 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:14:52.906810    1528 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:14:52.906810    1528 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 20:14:52.907808    1528 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 20:14:52.907808    1528 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:14:52.907808    1528 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:14:52.907808    1528 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:14:52.907808    1528 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:14:52.908849    1528 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:14:52.908909    1528 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:14:52.908909    1528 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:14:52.908909    1528 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:14:52.912070    1528 out.go:252]   - Booting up control plane ...
	I1212 20:14:52.912070    1528 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:14:52.912070    1528 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:14:52.912070    1528 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:14:52.912070    1528 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:14:52.913075    1528 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:14:52.913075    1528 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:14:52.913075    1528 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:14:52.913075    1528 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:14:52.914083    1528 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:14:52.914083    1528 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:14:52.914083    1528 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000441542s
	I1212 20:14:52.914083    1528 kubeadm.go:319] 
	I1212 20:14:52.914083    1528 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 20:14:52.914083    1528 kubeadm.go:319] 	- The kubelet is not running
	I1212 20:14:52.914083    1528 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 20:14:52.914083    1528 kubeadm.go:319] 
	I1212 20:14:52.914083    1528 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 20:14:52.915069    1528 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 20:14:52.915069    1528 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 20:14:52.915069    1528 kubeadm.go:319] 
	W1212 20:14:52.915069    1528 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000441542s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 20:14:52.921774    1528 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1212 20:14:53.390305    1528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:14:53.408818    1528 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:14:53.413243    1528 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:14:53.425325    1528 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:14:53.425325    1528 kubeadm.go:158] found existing configuration files:
	
	I1212 20:14:53.430625    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:14:53.442895    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:14:53.446965    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:14:53.464658    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:14:53.478038    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:14:53.482805    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:14:53.499083    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:14:53.513919    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:14:53.518566    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:14:53.538555    1528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:14:53.552479    1528 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:14:53.557205    1528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:14:53.576642    1528 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:14:53.698383    1528 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1212 20:14:53.775189    1528 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 20:14:53.868267    1528 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:18:54.359522    1528 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 20:18:54.359522    1528 kubeadm.go:319] 
	I1212 20:18:54.359522    1528 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 20:18:54.362954    1528 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 20:18:54.363173    1528 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:18:54.363383    1528 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:18:54.363609    1528 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 20:18:54.363609    1528 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 20:18:54.363609    1528 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 20:18:54.363609    1528 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 20:18:54.363609    1528 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 20:18:54.363609    1528 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 20:18:54.364132    1528 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 20:18:54.364423    1528 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 20:18:54.364423    1528 kubeadm.go:319] CONFIG_INET: enabled
	I1212 20:18:54.364423    1528 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 20:18:54.364423    1528 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 20:18:54.364423    1528 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 20:18:54.364423    1528 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 20:18:54.364950    1528 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 20:18:54.365131    1528 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 20:18:54.365131    1528 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 20:18:54.365131    1528 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 20:18:54.365131    1528 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 20:18:54.365131    1528 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 20:18:54.365131    1528 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 20:18:54.365662    1528 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 20:18:54.365743    1528 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 20:18:54.365828    1528 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 20:18:54.365917    1528 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 20:18:54.366005    1528 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 20:18:54.366087    1528 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 20:18:54.366168    1528 kubeadm.go:319] OS: Linux
	I1212 20:18:54.366224    1528 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:18:54.366255    1528 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:18:54.366308    1528 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 20:18:54.366823    1528 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:18:54.366960    1528 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:18:54.367127    1528 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:18:54.367127    1528 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:18:54.369422    1528 out.go:252]   - Generating certificates and keys ...
	I1212 20:18:54.369422    1528 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:18:54.369422    1528 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:18:54.369953    1528 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 20:18:54.370159    1528 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 20:18:54.370228    1528 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 20:18:54.370309    1528 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 20:18:54.370471    1528 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 20:18:54.370639    1528 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 20:18:54.370717    1528 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 20:18:54.370717    1528 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 20:18:54.370717    1528 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 20:18:54.370717    1528 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:18:54.370717    1528 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:18:54.371251    1528 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:18:54.371313    1528 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:18:54.371344    1528 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:18:54.371344    1528 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:18:54.371344    1528 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:18:54.371344    1528 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:18:54.374291    1528 out.go:252]   - Booting up control plane ...
	I1212 20:18:54.374291    1528 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:18:54.374291    1528 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:18:54.374291    1528 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:18:54.374291    1528 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:18:54.375259    1528 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:18:54.375259    1528 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:18:54.375259    1528 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:18:54.375259    1528 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:18:54.375259    1528 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:18:54.375259    1528 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:18:54.375259    1528 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000961807s
	I1212 20:18:54.375259    1528 kubeadm.go:319] 
	I1212 20:18:54.376246    1528 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 20:18:54.376246    1528 kubeadm.go:319] 	- The kubelet is not running
	I1212 20:18:54.376405    1528 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 20:18:54.376405    1528 kubeadm.go:319] 
	I1212 20:18:54.376405    1528 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 20:18:54.376405    1528 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 20:18:54.376405    1528 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 20:18:54.376405    1528 kubeadm.go:319] 
	I1212 20:18:54.376405    1528 kubeadm.go:403] duration metric: took 12m6.6943451s to StartCluster
	I1212 20:18:54.376405    1528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:18:54.380250    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:18:54.441453    1528 cri.go:89] found id: ""
	I1212 20:18:54.441453    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.441453    1528 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:18:54.441453    1528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:18:54.446414    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:18:54.508794    1528 cri.go:89] found id: ""
	I1212 20:18:54.508794    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.508794    1528 logs.go:284] No container was found matching "etcd"
	I1212 20:18:54.508794    1528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:18:54.513698    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:18:54.553213    1528 cri.go:89] found id: ""
	I1212 20:18:54.553257    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.553257    1528 logs.go:284] No container was found matching "coredns"
	I1212 20:18:54.553295    1528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:18:54.558235    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:18:54.603262    1528 cri.go:89] found id: ""
	I1212 20:18:54.603262    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.603262    1528 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:18:54.603262    1528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:18:54.608185    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:18:54.648151    1528 cri.go:89] found id: ""
	I1212 20:18:54.648151    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.648151    1528 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:18:54.648151    1528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:18:54.652647    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:18:54.693419    1528 cri.go:89] found id: ""
	I1212 20:18:54.693419    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.693419    1528 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:18:54.693419    1528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:18:54.697661    1528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:18:54.737800    1528 cri.go:89] found id: ""
	I1212 20:18:54.737800    1528 logs.go:282] 0 containers: []
	W1212 20:18:54.737800    1528 logs.go:284] No container was found matching "kindnet"
	I1212 20:18:54.737858    1528 logs.go:123] Gathering logs for container status ...
	I1212 20:18:54.737858    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:18:54.790460    1528 logs.go:123] Gathering logs for kubelet ...
	I1212 20:18:54.790460    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:18:54.852887    1528 logs.go:123] Gathering logs for dmesg ...
	I1212 20:18:54.852887    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:18:54.883744    1528 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:18:54.883744    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:18:54.965870    1528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:18:54.956009   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.957362   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.959496   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.962003   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.963316   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:18:54.956009   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.957362   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.959496   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.962003   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:18:54.963316   40640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:18:54.965870    1528 logs.go:123] Gathering logs for Docker ...
	I1212 20:18:54.965870    1528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1212 20:18:55.009075    1528 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000961807s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 20:18:55.009075    1528 out.go:285] * 
	W1212 20:18:55.009075    1528 out.go:285] * 
	W1212 20:18:55.011173    1528 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:18:55.016858    1528 out.go:203] 
	W1212 20:18:55.021226    1528 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	
	W1212 20:18:55.021226    1528 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 20:18:55.021226    1528 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 20:18:55.024694    1528 out.go:203] 
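	The troubleshooting steps the log suggests ('systemctl status kubelet', 'journalctl -xeu kubelet', the Suggestion line's --extra-config flag) can be collected into a short script; this is a sketch only, to be run on the affected node and host, and whether the cgroup-driver change actually resolves this WSL2 failure is an assumption, not something the log confirms:

```shell
#!/usr/bin/env bash
# Sketch of the remediation steps suggested by the log above. The first two
# commands run inside the minikube node (e.g. via `minikube ssh`); assumes
# systemd is present, as the kubeadm message itself does.
set -uo pipefail

# 1. Inspect why the kubelet never answered http://127.0.0.1:10248/healthz.
systemctl status kubelet --no-pager
journalctl -xeu kubelet -n 200 --no-pager

# 2. The [WARNING Service-kubelet] line says the unit is not enabled.
sudo systemctl enable kubelet.service

# 3. From the host: retry with the cgroup driver the Suggestion line proposes.
#    (Note cri-dockerd logged "Setting cgroupDriver cgroupfs" in the Docker
#    section below, so a mismatch between kubelet and runtime cgroup drivers
#    is one plausible cause; this flag is a guess at aligning them.)
minikube start -p functional-468800 \
  --extra-config=kubelet.cgroup-driver=systemd
```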
	
	
	==> Docker <==
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.259960912Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.259968912Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.260013916Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.260053720Z" level=info msg="Initializing buildkit"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.356181012Z" level=info msg="Completed buildkit initialization"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.364821976Z" level=info msg="Daemon has completed initialization"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.364991591Z" level=info msg="API listen on /run/docker.sock"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.365009292Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.365041495Z" level=info msg="API listen on [::]:2376"
	Dec 12 20:06:44 functional-468800 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 12 20:06:44 functional-468800 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 20:06:44 functional-468800 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 12 20:06:44 functional-468800 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 12 20:06:45 functional-468800 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Start docker client with request timeout 0s"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Loaded network plugin cni"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 12 20:06:45 functional-468800 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:21:32.534157   44779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:21:32.535472   44779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:21:32.536481   44779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:21:32.537440   44779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:21:32.538820   44779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000759] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000963] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000962] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001270] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000859] FS:  0000000000000000 GS:  0000000000000000
	[Dec12 20:06] CPU: 0 PID: 65148 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000914] RIP: 0033:0x7f777423db20
	[  +0.000416] Code: Unable to access opcode bytes at RIP 0x7f777423daf6.
	[  +0.000639] RSP: 002b:00007ffce8bd2080 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000797] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000800] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000998] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001310] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001295] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.002739] FS:  0000000000000000 GS:  0000000000000000
	[  +0.817521] CPU: 11 PID: 65286 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000931] RIP: 0033:0x7f6f12da3b20
	[  +0.000392] Code: Unable to access opcode bytes at RIP 0x7f6f12da3af6.
	[  +0.000657] RSP: 002b:00007ffe2eabe1e0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000812] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000817] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000776] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000784] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000994] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001070] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 20:21:32 up  1:23,  0 user,  load average: 0.60, 0.41, 0.45
	Linux functional-468800 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 20:21:29 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:21:30 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 527.
	Dec 12 20:21:30 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:21:30 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:21:30 functional-468800 kubelet[44614]: E1212 20:21:30.185658   44614 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:21:30 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:21:30 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:21:30 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 528.
	Dec 12 20:21:30 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:21:30 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:21:30 functional-468800 kubelet[44626]: E1212 20:21:30.938263   44626 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:21:30 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:21:30 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:21:31 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 529.
	Dec 12 20:21:31 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:21:31 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:21:31 functional-468800 kubelet[44653]: E1212 20:21:31.691140   44653 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:21:31 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:21:31 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:21:32 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 530.
	Dec 12 20:21:32 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:21:32 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:21:32 functional-468800 kubelet[44742]: E1212 20:21:32.438391   44742 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:21:32 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:21:32 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-468800 -n functional-468800
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-468800 -n functional-468800: exit status 2 (583.1221ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-468800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (23.78s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (52.62s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels


=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-468800 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-468800 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (50.3373328s)

** stderr ** 
	E1212 20:21:44.422055   13860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:21:54.510714   13860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:22:04.549330   13860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:22:14.590868   13860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:22:24.628457   13860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-468800 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	E1212 20:21:44.422055   13860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:21:54.510714   13860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:22:04.549330   13860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:22:14.590868   13860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:22:24.628457   13860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	E1212 20:21:44.422055   13860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:21:54.510714   13860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:22:04.549330   13860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:22:14.590868   13860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:22:24.628457   13860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	E1212 20:21:44.422055   13860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:21:54.510714   13860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:22:04.549330   13860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:22:14.590868   13860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:22:24.628457   13860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	E1212 20:21:44.422055   13860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:21:54.510714   13860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:22:04.549330   13860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:22:14.590868   13860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:22:24.628457   13860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	E1212 20:21:44.422055   13860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:21:54.510714   13860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:22:04.549330   13860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:22:14.590868   13860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	E1212 20:22:24.628457   13860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55778/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

** /stderr **
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-468800
helpers_test.go:244: (dbg) docker inspect functional-468800:

-- stdout --
	[
	    {
	        "Id": "0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356",
	        "Created": "2025-12-12T19:49:05.216637357Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42493,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T19:49:05.485304524Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/hosts",
	        "LogPath": "/var/lib/docker/containers/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356/0d3494c9d93e198c13f868212572545c1f07cb8e5eb475578c1ce4ca00c15356-json.log",
	        "Name": "/functional-468800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-468800:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-468800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06-init/diff:/var/lib/docker/overlay2/98a48e148b465d6f3cfef773b7defe35ccd4e6e0eb3c49d8b329f27b5b52fa09/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49e4cdae7e4b37ecb86097307546073273de36e954a273bf718938ac1d7d6d06/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-468800",
	                "Source": "/var/lib/docker/volumes/functional-468800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-468800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-468800",
	                "name.minikube.sigs.k8s.io": "functional-468800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1c576a1bc459e3ecef05356a56ff79fb60b3540cf362d7af9e4f9ccd4a4dded4",
	            "SandboxKey": "/var/run/docker/netns/1c576a1bc459",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55779"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55780"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55781"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55777"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55778"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-468800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a0db2e63885ea6d1e4cf32e8285a0f1e1fe10bae67ef9ab957c6270bdb8c136b",
	                    "EndpointID": "d5e453160824066e5fee477ceb2ca5411fd18d149268eed593084ba8a0cf13dd",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-468800",
	                        "0d3494c9d93e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-468800 -n functional-468800
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-468800 -n functional-468800: exit status 2 (574.8396ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-468800 logs -n 25: (1.0378606s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-468800 ssh sudo cat /usr/share/ca-certificates/133962.pem                                                                                      │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh            │ functional-468800 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ docker-env     │ functional-468800 docker-env                                                                                                                              │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh            │ functional-468800 ssh sudo cat /etc/test/nested/copy/13396/hosts                                                                                          │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-468800 image load --daemon kicbase/echo-server:functional-468800 --alsologtostderr                                                             │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-468800 image ls                                                                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image          │ functional-468800 image load --daemon kicbase/echo-server:functional-468800 --alsologtostderr                                                             │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image          │ functional-468800 image ls                                                                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image          │ functional-468800 image load --daemon kicbase/echo-server:functional-468800 --alsologtostderr                                                             │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image          │ functional-468800 image ls                                                                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image          │ functional-468800 image save kicbase/echo-server:functional-468800 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image          │ functional-468800 image rm kicbase/echo-server:functional-468800 --alsologtostderr                                                                        │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image          │ functional-468800 image ls                                                                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image          │ functional-468800 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr                                       │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image          │ functional-468800 image ls                                                                                                                                │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ image          │ functional-468800 image save --daemon kicbase/echo-server:functional-468800 --alsologtostderr                                                             │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:21 UTC │
	│ start          │ -p functional-468800 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0                                       │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:22 UTC │                     │
	│ start          │ -p functional-468800 --dry-run --alsologtostderr -v=1 --driver=docker --kubernetes-version=v1.35.0-beta.0                                                 │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:22 UTC │                     │
	│ start          │ -p functional-468800 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0                                       │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:22 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-468800 --alsologtostderr -v=1                                                                                            │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:22 UTC │                     │
	│ update-context │ functional-468800 update-context --alsologtostderr -v=2                                                                                                   │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:22 UTC │ 12 Dec 25 20:22 UTC │
	│ update-context │ functional-468800 update-context --alsologtostderr -v=2                                                                                                   │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:22 UTC │ 12 Dec 25 20:22 UTC │
	│ update-context │ functional-468800 update-context --alsologtostderr -v=2                                                                                                   │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:22 UTC │ 12 Dec 25 20:22 UTC │
	│ image          │ functional-468800 image ls --format short --alsologtostderr                                                                                               │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:22 UTC │                     │
	│ ssh            │ functional-468800 ssh pgrep buildkitd                                                                                                                     │ functional-468800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 20:22 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:22:23
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:22:23.221121    3452 out.go:360] Setting OutFile to fd 1996 ...
	I1212 20:22:23.279090    3452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:22:23.279090    3452 out.go:374] Setting ErrFile to fd 1012...
	I1212 20:22:23.279090    3452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:22:23.304610    3452 out.go:368] Setting JSON to false
	I1212 20:22:23.307604    3452 start.go:133] hostinfo: {"hostname":"minikube4","uptime":5081,"bootTime":1765565862,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 20:22:23.307604    3452 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 20:22:23.311607    3452 out.go:179] * [functional-468800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 20:22:23.312604    3452 notify.go:221] Checking for updates...
	I1212 20:22:23.315613    3452 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 20:22:23.317610    3452 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:22:23.319615    3452 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 20:22:23.322604    3452 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:22:23.324597    3452 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:22:23.190445   11460 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 20:22:23.191446   11460 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:22:23.314613   11460 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 20:22:23.318612   11460 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:22:23.584928   11460 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 20:22:23.558479431 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 20:22:23.588921   11460 out.go:179] * Using the docker driver based on existing profile
	I1212 20:22:23.326596    3452 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 20:22:23.327598    3452 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:22:23.481603    3452 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 20:22:23.484608    3452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:22:23.737731    3452 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 20:22:23.719883179 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 20:22:23.742724    3452 out.go:179] * Using the docker driver based on the existing profile
	I1212 20:22:23.744723    3452 start.go:309] selected driver: docker
	I1212 20:22:23.744723    3452 start.go:927] validating driver "docker" against &{Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:22:23.744723    3452 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:22:23.781732    3452 out.go:203] 
	W1212 20:22:23.784721    3452 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1212 20:22:23.786731    3452 out.go:203] 
	I1212 20:22:23.590929   11460 start.go:309] selected driver: docker
	I1212 20:22:23.590929   11460 start.go:927] validating driver "docker" against &{Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:22:23.590929   11460 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:22:23.598921   11460 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:22:23.834212   11460 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 20:22:23.818279187 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 20:22:23.869216   11460 cni.go:84] Creating CNI manager for ""
	I1212 20:22:23.869216   11460 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 20:22:23.869216   11460 start.go:353] cluster config:
	{Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDN
SLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:22:23.874216   11460 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.259960912Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.259968912Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.260013916Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.260053720Z" level=info msg="Initializing buildkit"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.356181012Z" level=info msg="Completed buildkit initialization"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.364821976Z" level=info msg="Daemon has completed initialization"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.364991591Z" level=info msg="API listen on /run/docker.sock"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.365009292Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 20:06:44 functional-468800 dockerd[21655]: time="2025-12-12T20:06:44.365041495Z" level=info msg="API listen on [::]:2376"
	Dec 12 20:06:44 functional-468800 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 12 20:06:44 functional-468800 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 20:06:44 functional-468800 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 12 20:06:44 functional-468800 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 12 20:06:45 functional-468800 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Start docker client with request timeout 0s"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Loaded network plugin cni"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 12 20:06:45 functional-468800 cri-dockerd[21969]: time="2025-12-12T20:06:45Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 12 20:06:45 functional-468800 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:22:26.192516   46207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:22:26.194503   46207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:22:26.195465   46207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:22:26.196522   46207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:22:26.197857   46207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000759] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000963] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000962] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001270] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000859] FS:  0000000000000000 GS:  0000000000000000
	[Dec12 20:06] CPU: 0 PID: 65148 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000914] RIP: 0033:0x7f777423db20
	[  +0.000416] Code: Unable to access opcode bytes at RIP 0x7f777423daf6.
	[  +0.000639] RSP: 002b:00007ffce8bd2080 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000797] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000800] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000998] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001310] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001295] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.002739] FS:  0000000000000000 GS:  0000000000000000
	[  +0.817521] CPU: 11 PID: 65286 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000931] RIP: 0033:0x7f6f12da3b20
	[  +0.000392] Code: Unable to access opcode bytes at RIP 0x7f6f12da3af6.
	[  +0.000657] RSP: 002b:00007ffe2eabe1e0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000812] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000817] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000776] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000784] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000994] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001070] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 20:22:26 up  1:24,  0 user,  load average: 0.61, 0.45, 0.46
	Linux functional-468800 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 20:22:22 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:22:23 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 598.
	Dec 12 20:22:23 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:22:23 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:22:23 functional-468800 kubelet[45895]: E1212 20:22:23.472216   45895 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:22:23 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:22:23 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:22:24 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 599.
	Dec 12 20:22:24 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:22:24 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:22:24 functional-468800 kubelet[45908]: E1212 20:22:24.200329   45908 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:22:24 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:22:24 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:22:24 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 600.
	Dec 12 20:22:24 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:22:24 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:22:24 functional-468800 kubelet[46001]: E1212 20:22:24.947333   46001 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:22:24 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:22:24 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:22:25 functional-468800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 601.
	Dec 12 20:22:25 functional-468800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:22:25 functional-468800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:22:25 functional-468800 kubelet[46074]: E1212 20:22:25.701003   46074 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:22:25 functional-468800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:22:25 functional-468800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-468800 -n functional-468800
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-468800 -n functional-468800: exit status 2 (593.9985ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-468800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (52.62s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-468800 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-468800 create deployment hello-node --image kicbase/echo-server: exit status 1 (99.1954ms)

** stderr ** 
	error: failed to create deployment: Post "https://127.0.0.1:55778/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": EOF

** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-468800 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.10s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-468800 service list: exit status 103 (474.3877ms)

-- stdout --
	* The control-plane node functional-468800 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-468800"

-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-windows-amd64.exe -p functional-468800 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-468800 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-468800\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.47s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-468800 service list -o json: exit status 103 (486.4639ms)

-- stdout --
	* The control-plane node functional-468800 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-468800"

-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-windows-amd64.exe -p functional-468800 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.49s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.52s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-468800 service --namespace=default --https --url hello-node: exit status 103 (518.2381ms)

-- stdout --
	* The control-plane node functional-468800 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-468800"

-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-468800 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.52s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-468800 service hello-node --url --format={{.IP}}: exit status 103 (500.0117ms)

-- stdout --
	* The control-plane node functional-468800 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-468800"

-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-468800 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-468800 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-468800\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-468800 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-468800 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1212 20:20:17.678366    4488 out.go:360] Setting OutFile to fd 1708 ...
I1212 20:20:17.757377    4488 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:20:17.757377    4488 out.go:374] Setting ErrFile to fd 1992...
I1212 20:20:17.757377    4488 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:20:17.769366    4488 mustload.go:66] Loading cluster: functional-468800
I1212 20:20:17.770361    4488 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1212 20:20:17.780364    4488 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
I1212 20:20:17.834351    4488 host.go:66] Checking if "functional-468800" exists ...
I1212 20:20:17.839371    4488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-468800
I1212 20:20:17.890361    4488 api_server.go:166] Checking apiserver status ...
I1212 20:20:17.895357    4488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 20:20:17.900356    4488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
I1212 20:20:17.958362    4488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
W1212 20:20:18.098066    4488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1212 20:20:18.101994    4488 out.go:179] * The control-plane node functional-468800 apiserver is not running: (state=Stopped)
I1212 20:20:18.105032    4488 out.go:179]   To start a cluster, run: "minikube start -p functional-468800"

stdout: * The control-plane node functional-468800 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-468800"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-468800 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-468800 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-468800 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-468800 tunnel --alsologtostderr] ...
helpers_test.go:520: unable to terminate pid 2324: Access is denied.
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-468800 tunnel --alsologtostderr] stdout:
* The control-plane node functional-468800 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-468800"
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-468800 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.48s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-468800 service hello-node --url: exit status 103 (476.7746ms)

-- stdout --
	* The control-plane node functional-468800 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-468800"

-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-468800 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-468800 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-468800"
functional_test.go:1579: failed to parse "* The control-plane node functional-468800 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-468800\"": parse "* The control-plane node functional-468800 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-468800\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.48s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (20.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-468800 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-468800 apply -f testdata\testsvc.yaml: exit status 1 (20.1798593s)

** stderr ** 
	error: error validating "testdata\\testsvc.yaml": error validating data: failed to download openapi: Get "https://127.0.0.1:55778/openapi/v2?timeout=32s": EOF; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-468800 apply -f testdata\testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (20.18s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/powershell (2.84s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/powershell
functional_test.go:514: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-468800 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-468800"
functional_test.go:514: (dbg) Non-zero exit: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-468800 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-468800": exit status 1 (2.8411175s)

-- stdout --
	functional-468800
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	docker-env: in-use
	

-- /stdout --
functional_test.go:520: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/powershell (2.84s)

TestKubernetesUpgrade (844.6s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-716700 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-716700 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker: (52.5336414s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-716700
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-716700: (12.7172542s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-716700 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-716700 status --format={{.Host}}: exit status 7 (273.8061ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-716700 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker
E1212 21:07:31.904416   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-716700 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker: exit status 109 (12m39.5994507s)

-- stdout --
	* [kubernetes-upgrade-716700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-716700" primary control-plane node in "kubernetes-upgrade-716700" cluster
	* Pulling base image v0.0.48-1765505794-22112 ...
	
	

-- /stdout --
** stderr ** 
	I1212 21:07:06.197444    3672 out.go:360] Setting OutFile to fd 1212 ...
	I1212 21:07:06.259434    3672 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:07:06.259434    3672 out.go:374] Setting ErrFile to fd 1740...
	I1212 21:07:06.259434    3672 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:07:06.280438    3672 out.go:368] Setting JSON to false
	I1212 21:07:06.285431    3672 start.go:133] hostinfo: {"hostname":"minikube4","uptime":7764,"bootTime":1765565862,"procs":198,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 21:07:06.285431    3672 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 21:07:06.289435    3672 out.go:179] * [kubernetes-upgrade-716700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 21:07:06.293427    3672 notify.go:221] Checking for updates...
	I1212 21:07:06.295429    3672 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:07:06.299429    3672 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:07:06.302428    3672 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 21:07:06.305457    3672 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 21:07:06.307440    3672 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:07:06.310422    3672 config.go:182] Loaded profile config "kubernetes-upgrade-716700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I1212 21:07:06.311425    3672 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 21:07:06.447431    3672 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 21:07:06.452427    3672 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:07:06.906705    3672 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:90 OomKillDisable:true NGoroutines:104 SystemTime:2025-12-12 21:07:06.885451435 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDesc
ription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Prog
ram Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:07:06.912578    3672 out.go:179] * Using the docker driver based on existing profile
	I1212 21:07:06.915585    3672 start.go:309] selected driver: docker
	I1212 21:07:06.915585    3672 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-716700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-716700 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:07:06.916171    3672 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:07:06.960062    3672 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:07:07.214118    3672 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:90 OomKillDisable:true NGoroutines:104 SystemTime:2025-12-12 21:07:07.194556049 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 In
dexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDesc
ription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Prog
ram Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:07:07.215119    3672 cni.go:84] Creating CNI manager for ""
	I1212 21:07:07.215119    3672 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:07:07.215119    3672 start.go:353] cluster config:
	{Name:kubernetes-upgrade-716700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-716700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:07:07.219121    3672 out.go:179] * Starting "kubernetes-upgrade-716700" primary control-plane node in "kubernetes-upgrade-716700" cluster
	I1212 21:07:07.221118    3672 cache.go:134] Beginning downloading kic base image for docker with docker
	I1212 21:07:07.223113    3672 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:07:07.226113    3672 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:07:07.226113    3672 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 21:07:07.226113    3672 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1212 21:07:07.226113    3672 cache.go:65] Caching tarball of preloaded images
	I1212 21:07:07.226113    3672 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 21:07:07.227116    3672 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1212 21:07:07.227116    3672 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-716700\config.json ...
	I1212 21:07:07.318114    3672 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:07:07.318114    3672 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:07:07.318114    3672 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:07:07.318114    3672 start.go:360] acquireMachinesLock for kubernetes-upgrade-716700: {Name:mkd9043586f691cbe14eb864323e2025d1c80eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:07:07.318114    3672 start.go:364] duration metric: took 0s to acquireMachinesLock for "kubernetes-upgrade-716700"
	I1212 21:07:07.318114    3672 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:07:07.318114    3672 fix.go:54] fixHost starting: 
	I1212 21:07:07.328120    3672 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-716700 --format={{.State.Status}}
	I1212 21:07:07.393113    3672 fix.go:112] recreateIfNeeded on kubernetes-upgrade-716700: state=Stopped err=<nil>
	W1212 21:07:07.393113    3672 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:07:07.399109    3672 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-716700" ...
	I1212 21:07:07.404117    3672 cli_runner.go:164] Run: docker start kubernetes-upgrade-716700
	I1212 21:07:07.948466    3672 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-716700 --format={{.State.Status}}
	I1212 21:07:08.005465    3672 kic.go:430] container "kubernetes-upgrade-716700" state is running.
	I1212 21:07:08.013470    3672 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-716700
	I1212 21:07:08.071477    3672 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-716700\config.json ...
	I1212 21:07:08.073478    3672 machine.go:94] provisionDockerMachine start ...
	I1212 21:07:08.077475    3672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-716700
	I1212 21:07:08.133251    3672 main.go:143] libmachine: Using SSH client type: native
	I1212 21:07:08.134253    3672 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 60365 <nil> <nil>}
	I1212 21:07:08.134253    3672 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:07:08.136256    3672 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 21:07:11.319059    3672 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-716700
	
	I1212 21:07:11.319059    3672 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-716700"
	I1212 21:07:11.325083    3672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-716700
	I1212 21:07:11.383250    3672 main.go:143] libmachine: Using SSH client type: native
	I1212 21:07:11.383250    3672 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 60365 <nil> <nil>}
	I1212 21:07:11.383250    3672 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-716700 && echo "kubernetes-upgrade-716700" | sudo tee /etc/hostname
	I1212 21:07:11.571631    3672 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-716700
	
	I1212 21:07:11.575586    3672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-716700
	I1212 21:07:11.630961    3672 main.go:143] libmachine: Using SSH client type: native
	I1212 21:07:11.631879    3672 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 60365 <nil> <nil>}
	I1212 21:07:11.631949    3672 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-716700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-716700/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-716700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:07:11.797763    3672 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:07:11.797763    3672 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1212 21:07:11.797763    3672 ubuntu.go:190] setting up certificates
	I1212 21:07:11.797763    3672 provision.go:84] configureAuth start
	I1212 21:07:11.802123    3672 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-716700
	I1212 21:07:11.855273    3672 provision.go:143] copyHostCerts
	I1212 21:07:11.855475    3672 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1212 21:07:11.855475    3672 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1212 21:07:11.855475    3672 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 21:07:11.857436    3672 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1212 21:07:11.857475    3672 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1212 21:07:11.857475    3672 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 21:07:11.858694    3672 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1212 21:07:11.858844    3672 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1212 21:07:11.858956    3672 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1212 21:07:11.858956    3672 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-716700 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-716700 localhost minikube]
	I1212 21:07:11.989406    3672 provision.go:177] copyRemoteCerts
	I1212 21:07:11.993383    3672 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:07:11.997659    3672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-716700
	I1212 21:07:12.054116    3672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60365 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-716700\id_rsa Username:docker}
	I1212 21:07:12.193295    3672 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 21:07:12.222192    3672 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 21:07:12.247481    3672 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 21:07:12.277408    3672 provision.go:87] duration metric: took 479.5817ms to configureAuth
	I1212 21:07:12.277452    3672 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:07:12.277886    3672 config.go:182] Loaded profile config "kubernetes-upgrade-716700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:07:12.281923    3672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-716700
	I1212 21:07:12.343261    3672 main.go:143] libmachine: Using SSH client type: native
	I1212 21:07:12.343660    3672 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 60365 <nil> <nil>}
	I1212 21:07:12.343660    3672 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 21:07:12.528752    3672 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1212 21:07:12.528853    3672 ubuntu.go:71] root file system type: overlay
	I1212 21:07:12.529153    3672 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 21:07:12.533079    3672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-716700
	I1212 21:07:12.592611    3672 main.go:143] libmachine: Using SSH client type: native
	I1212 21:07:12.592611    3672 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 60365 <nil> <nil>}
	I1212 21:07:12.592611    3672 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 21:07:12.781872    3672 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 21:07:12.785880    3672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-716700
	I1212 21:07:12.841874    3672 main.go:143] libmachine: Using SSH client type: native
	I1212 21:07:12.841874    3672 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 60365 <nil> <nil>}
	I1212 21:07:12.841874    3672 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 21:07:13.021705    3672 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:07:13.021705    3672 machine.go:97] duration metric: took 4.9481513s to provisionDockerMachine
	I1212 21:07:13.021705    3672 start.go:293] postStartSetup for "kubernetes-upgrade-716700" (driver="docker")
	I1212 21:07:13.021705    3672 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:07:13.025704    3672 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:07:13.028699    3672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-716700
	I1212 21:07:13.079693    3672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60365 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-716700\id_rsa Username:docker}
	I1212 21:07:13.219012    3672 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:07:13.225626    3672 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:07:13.225626    3672 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:07:13.225626    3672 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1212 21:07:13.226306    3672 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1212 21:07:13.226815    3672 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> 133962.pem in /etc/ssl/certs
	I1212 21:07:13.231203    3672 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:07:13.243219    3672 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /etc/ssl/certs/133962.pem (1708 bytes)
	I1212 21:07:13.273230    3672 start.go:296] duration metric: took 251.5206ms for postStartSetup
	I1212 21:07:13.278980    3672 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:07:13.283190    3672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-716700
	I1212 21:07:13.350245    3672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60365 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-716700\id_rsa Username:docker}
	I1212 21:07:13.478180    3672 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:07:13.486317    3672 fix.go:56] duration metric: took 6.1681075s for fixHost
	I1212 21:07:13.486317    3672 start.go:83] releasing machines lock for "kubernetes-upgrade-716700", held for 6.1681075s
	I1212 21:07:13.492322    3672 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-716700
	I1212 21:07:13.545319    3672 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1212 21:07:13.549313    3672 ssh_runner.go:195] Run: cat /version.json
	I1212 21:07:13.549313    3672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-716700
	I1212 21:07:13.552324    3672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-716700
	I1212 21:07:13.603318    3672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60365 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-716700\id_rsa Username:docker}
	I1212 21:07:13.603318    3672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60365 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-716700\id_rsa Username:docker}
	W1212 21:07:13.725156    3672 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1212 21:07:13.729154    3672 ssh_runner.go:195] Run: systemctl --version
	I1212 21:07:13.743155    3672 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:07:13.752174    3672 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:07:13.757152    3672 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:07:13.770159    3672 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:07:13.770159    3672 start.go:496] detecting cgroup driver to use...
	I1212 21:07:13.770159    3672 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:07:13.770159    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:07:13.795154    3672 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 21:07:13.813170    3672 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 21:07:13.828157    3672 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 21:07:13.832160    3672 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1212 21:07:13.833153    3672 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1212 21:07:13.833153    3672 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1212 21:07:13.850154    3672 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:07:13.867150    3672 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 21:07:13.885150    3672 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:07:13.904617    3672 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:07:13.920978    3672 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 21:07:13.941228    3672 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 21:07:13.959339    3672 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 21:07:13.976343    3672 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:07:13.993338    3672 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:07:14.009331    3672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:07:14.162783    3672 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 21:07:14.363210    3672 start.go:496] detecting cgroup driver to use...
	I1212 21:07:14.363210    3672 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:07:14.368197    3672 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 21:07:14.394571    3672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:07:14.416278    3672 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:07:14.481600    3672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:07:14.516242    3672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 21:07:14.541820    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:07:14.571157    3672 ssh_runner.go:195] Run: which cri-dockerd
	I1212 21:07:14.581160    3672 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 21:07:14.592176    3672 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1212 21:07:14.615166    3672 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 21:07:14.771144    3672 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 21:07:14.944207    3672 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 21:07:14.944207    3672 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 21:07:14.979582    3672 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1212 21:07:15.001593    3672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:07:15.185611    3672 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 21:07:17.763619    3672 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5779688s)
	I1212 21:07:17.768709    3672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:07:17.795795    3672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1212 21:07:17.818220    3672 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1212 21:07:17.843726    3672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:07:17.878369    3672 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 21:07:18.040084    3672 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 21:07:18.241633    3672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:07:18.397839    3672 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 21:07:18.428400    3672 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1212 21:07:18.452393    3672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:07:18.618913    3672 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1212 21:07:18.768930    3672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:07:18.791934    3672 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 21:07:18.796929    3672 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 21:07:18.804939    3672 start.go:564] Will wait 60s for crictl version
	I1212 21:07:18.808919    3672 ssh_runner.go:195] Run: which crictl
	I1212 21:07:18.819924    3672 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:07:18.861496    3672 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1212 21:07:18.864500    3672 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:07:18.912502    3672 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:07:18.956617    3672 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1212 21:07:18.959604    3672 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-716700 dig +short host.docker.internal
	I1212 21:07:19.103302    3672 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1212 21:07:19.107281    3672 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1212 21:07:19.114284    3672 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:07:19.134173    3672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-716700
	I1212 21:07:19.196046    3672 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-716700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-716700 Namespace:default APIServerHAVIP: APISe
rverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFir
mwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 21:07:19.196046    3672 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 21:07:19.200042    3672 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:07:19.238022    3672 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.0
	registry.k8s.io/kube-controller-manager:v1.28.0
	registry.k8s.io/kube-scheduler:v1.28.0
	registry.k8s.io/kube-proxy:v1.28.0
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 21:07:19.238022    3672 docker.go:697] registry.k8s.io/kube-apiserver:v1.35.0-beta.0 wasn't preloaded
	I1212 21:07:19.244020    3672 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1212 21:07:19.265032    3672 ssh_runner.go:195] Run: which lz4
	I1212 21:07:19.278031    3672 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 21:07:19.286710    3672 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 21:07:19.286710    3672 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (284622240 bytes)
	I1212 21:07:22.419002    3672 docker.go:655] duration metric: took 3.1459317s to copy over tarball
	I1212 21:07:22.423004    3672 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 21:07:25.103753    3672 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.6807075s)
	I1212 21:07:25.103753    3672 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 21:07:25.142671    3672 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1212 21:07:25.155774    3672 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2660 bytes)
	I1212 21:07:25.182883    3672 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1212 21:07:25.204828    3672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:07:25.357095    3672 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 21:07:33.811543    3672 ssh_runner.go:235] Completed: sudo systemctl restart docker: (8.4543179s)
	I1212 21:07:33.816245    3672 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:07:33.854617    3672 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 21:07:33.855621    3672 cache_images.go:86] Images are preloaded, skipping loading
	I1212 21:07:33.855621    3672 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 docker true true} ...
	I1212 21:07:33.855621    3672 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-716700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-716700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:07:33.858623    3672 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 21:07:33.947611    3672 cni.go:84] Creating CNI manager for ""
	I1212 21:07:33.947611    3672 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:07:33.947611    3672 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 21:07:33.948614    3672 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-716700 NodeName:kubernetes-upgrade-716700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:07:33.948614    3672 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-716700"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:07:33.951609    3672 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 21:07:33.967614    3672 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:07:33.972625    3672 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:07:33.987623    3672 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (331 bytes)
	I1212 21:07:34.008631    3672 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 21:07:34.031303    3672 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1212 21:07:34.054312    3672 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 21:07:34.063616    3672 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:07:34.086220    3672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:07:34.247821    3672 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:07:34.272824    3672 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-716700 for IP: 192.168.76.2
	I1212 21:07:34.272824    3672 certs.go:195] generating shared ca certs ...
	I1212 21:07:34.272824    3672 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:07:34.272824    3672 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1212 21:07:34.273836    3672 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1212 21:07:34.273836    3672 certs.go:257] generating profile certs ...
	I1212 21:07:34.273836    3672 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-716700\client.key
	I1212 21:07:34.274833    3672 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-716700\apiserver.key.3e6b3029
	I1212 21:07:34.274833    3672 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-716700\proxy-client.key
	I1212 21:07:34.275838    3672 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem (1338 bytes)
	W1212 21:07:34.275838    3672 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396_empty.pem, impossibly tiny 0 bytes
	I1212 21:07:34.275838    3672 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1212 21:07:34.276839    3672 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 21:07:34.276839    3672 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 21:07:34.276839    3672 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1212 21:07:34.277825    3672 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem (1708 bytes)
	I1212 21:07:34.278828    3672 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:07:34.308830    3672 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 21:07:34.337699    3672 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:07:34.369306    3672 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:07:34.401303    3672 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-716700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1212 21:07:34.426292    3672 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-716700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 21:07:34.454690    3672 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-716700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:07:34.496238    3672 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-716700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:07:34.533045    3672 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem --> /usr/share/ca-certificates/13396.pem (1338 bytes)
	I1212 21:07:34.563566    3672 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /usr/share/ca-certificates/133962.pem (1708 bytes)
	I1212 21:07:34.593378    3672 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:07:34.619383    3672 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:07:34.641393    3672 ssh_runner.go:195] Run: openssl version
	I1212 21:07:34.659659    3672 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13396.pem
	I1212 21:07:34.681413    3672 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13396.pem /etc/ssl/certs/13396.pem
	I1212 21:07:34.702510    3672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13396.pem
	I1212 21:07:34.710511    3672 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 21:07:34.714504    3672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13396.pem
	I1212 21:07:34.761506    3672 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:07:34.777506    3672 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/133962.pem
	I1212 21:07:34.794512    3672 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/133962.pem /etc/ssl/certs/133962.pem
	I1212 21:07:34.810505    3672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133962.pem
	I1212 21:07:34.817508    3672 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 21:07:34.821504    3672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133962.pem
	I1212 21:07:34.868892    3672 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:07:34.889045    3672 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:07:34.905116    3672 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:07:34.920120    3672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:07:34.927114    3672 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:07:34.931105    3672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:07:34.979242    3672 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:07:34.998509    3672 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:07:35.012879    3672 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:07:35.060117    3672 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:07:35.114126    3672 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:07:35.175262    3672 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:07:35.236188    3672 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:07:35.286191    3672 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 21:07:35.336132    3672 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-716700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-716700 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:07:35.341090    3672 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 21:07:35.382619    3672 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:07:35.400230    3672 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 21:07:35.400230    3672 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 21:07:35.405233    3672 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:07:35.418233    3672 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:07:35.421237    3672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-716700
	I1212 21:07:35.472231    3672 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-716700" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:07:35.472231    3672 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-716700" cluster setting kubeconfig missing "kubernetes-upgrade-716700" context setting]
	I1212 21:07:35.473242    3672 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:07:35.493230    3672 kapi.go:59] client config for kubernetes-upgrade-716700: &rest.Config{Host:"https://127.0.0.1:60369", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-716700/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-716700/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyD
ata:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff6c79d9080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 21:07:35.494233    3672 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 21:07:35.494233    3672 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 21:07:35.494233    3672 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 21:07:35.494233    3672 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 21:07:35.494233    3672 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 21:07:35.499241    3672 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:07:35.512233    3672 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-12 21:06:35.137902243 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-12 21:07:34.039369798 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "kubernetes-upgrade-716700"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
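The drift shown in the diff above is the kubeadm v1beta3 → v1beta4 API change: `extraArgs` moved from a string map to a list of name/value pairs, and the deprecated etcd `proxy-refresh-interval` override was dropped. kubeadm itself can perform this schema conversion with its `config migrate` subcommand; a minimal sketch, assuming kubeadm is on PATH (file paths here mirror the log but are illustrative):

```shell
# convert an old v1beta3 kubeadm config to the v1beta4 schema;
# kubeadm rewrites map-style extraArgs into the name/value list form
kubeadm config migrate \
  --old-config /var/tmp/minikube/kubeadm.yaml \
  --new-config /var/tmp/minikube/kubeadm.yaml.new
```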
	I1212 21:07:35.512233    3672 kubeadm.go:1161] stopping kube-system containers ...
	I1212 21:07:35.515228    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 21:07:35.549088    3672 docker.go:484] Stopping containers: [8e2fd5a5e17f 5952b5c79d6d 6193b44cbfbe a5f03d2822c4 f72062851ca7 2642b80611a0 810caaf8754f 84dfd6ad86fa]
	I1212 21:07:35.553055    3672 ssh_runner.go:195] Run: docker stop 8e2fd5a5e17f 5952b5c79d6d 6193b44cbfbe a5f03d2822c4 f72062851ca7 2642b80611a0 810caaf8754f 84dfd6ad86fa
	I1212 21:07:35.594707    3672 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 21:07:35.623549    3672 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:07:35.641020    3672 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5643 Dec 12 21:06 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec 12 21:06 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 12 21:06 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Dec 12 21:06 /etc/kubernetes/scheduler.conf
	
	I1212 21:07:35.644009    3672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 21:07:35.660010    3672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 21:07:35.678030    3672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 21:07:35.691009    3672 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:07:35.696021    3672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 21:07:35.713033    3672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 21:07:35.726013    3672 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:07:35.730013    3672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 21:07:35.746020    3672 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:07:35.769669    3672 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:07:35.843836    3672 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:07:36.552224    3672 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:07:36.795955    3672 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:07:36.859541    3672 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:07:36.921430    3672 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:07:36.927000    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:37.426952    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:37.926927    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:38.427045    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:38.928415    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:39.425128    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:39.927036    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:40.427151    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:40.928097    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:41.426975    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:41.926099    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:42.427730    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:42.927949    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:43.425677    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:43.926578    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:44.425843    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:44.925998    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:45.425978    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:45.925297    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:46.428572    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:46.927404    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:47.427303    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:47.925551    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:48.426564    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:48.926700    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:49.428120    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:49.926995    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:50.426688    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:50.925125    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:51.426832    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:51.925907    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:52.425493    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:52.926359    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:53.427248    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:53.926942    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:54.425817    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:54.925971    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:55.428913    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:55.926513    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:56.427147    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:56.928999    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:57.426860    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:57.926034    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:58.426589    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:58.929007    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:59.427628    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:07:59.930481    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:00.428058    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:00.926785    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:01.426946    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:01.927057    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:02.427611    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:02.927314    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:03.426480    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:03.926234    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:04.426557    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:04.928127    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:05.426464    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:05.926559    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:06.427235    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:06.928420    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:07.427237    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:07.927427    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:08.428459    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:08.928613    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:09.428280    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:09.927379    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:10.428080    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:10.926860    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:11.427898    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:11.928834    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:12.426631    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:12.927594    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:13.426052    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:13.927989    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:14.427804    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:14.927765    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:15.428062    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:15.928462    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:16.426337    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:16.927624    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:17.427617    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:17.926924    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:18.428435    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:18.929358    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:19.428229    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:19.927306    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:20.427910    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:20.927155    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:21.427397    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:21.927563    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:22.426781    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:22.927133    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:23.427016    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:23.926983    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:24.428233    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:24.929277    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:25.428990    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:25.926898    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:26.428708    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:26.927305    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:27.429308    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:27.927283    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:28.428101    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:28.927128    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:29.428903    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:29.929574    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:30.426999    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:30.928972    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:31.426886    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:31.928034    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:32.427539    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:32.926708    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:33.428068    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:33.927455    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:34.428100    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:34.927396    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:35.427769    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:35.926527    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:36.425817    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
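The run of `pgrep` calls above is minikube polling roughly every 500ms for the apiserver process to appear, up to a deadline. That wait loop can be sketched as a small shell helper; `wait_for_process` is a hypothetical name, not part of minikube, and it uses plain `pgrep -f` rather than the log's `pgrep -xnf`:

```shell
# poll pgrep (as the log does, ~every 500ms) until a process matching
# the pattern appears, or the attempt budget runs out
wait_for_process() {
  pattern=$1
  attempts=$2
  while [ "$attempts" -gt 0 ]; do
    if pgrep -f "$pattern" >/dev/null 2>&1; then
      echo "found"
      return 0
    fi
    attempts=$((attempts - 1))
    sleep 0.5
  done
  echo "timeout"
  return 1
}

# demo: start a short-lived process, then wait for it to show up
sleep 3 &
wait_for_process "sleep 3" 10
```

In the log the budget is exhausted (the apiserver never starts between 21:07:36 and 21:08:36), which is why minikube switches to gathering diagnostic logs immediately afterwards.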
	I1212 21:08:36.928096    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:08:36.970082    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:08:36.974096    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:08:37.007083    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:08:37.011080    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:08:37.042076    3672 logs.go:282] 0 containers: []
	W1212 21:08:37.042076    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:08:37.045076    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:08:37.076077    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:08:37.080687    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:08:37.121401    3672 logs.go:282] 0 containers: []
	W1212 21:08:37.121485    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:08:37.125647    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:08:37.162575    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:08:37.169848    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:08:37.223285    3672 logs.go:282] 0 containers: []
	W1212 21:08:37.223285    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:08:37.228292    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:08:37.255286    3672 logs.go:282] 0 containers: []
	W1212 21:08:37.255286    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:08:37.255286    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:08:37.255286    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:08:37.321261    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:08:37.321261    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:08:37.378656    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:08:37.378656    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:08:37.434975    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:08:37.435016    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:08:37.478455    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:08:37.478455    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:08:37.510756    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:08:37.510811    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:08:37.556231    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:08:37.556231    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:08:37.669273    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:08:37.669273    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:08:37.669273    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:08:37.716279    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:08:37.716279    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:08:40.349537    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:40.380489    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:08:40.417935    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:08:40.421927    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:08:40.454940    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:08:40.457930    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:08:40.486933    3672 logs.go:282] 0 containers: []
	W1212 21:08:40.486933    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:08:40.489928    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:08:40.520934    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:08:40.523929    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:08:40.552937    3672 logs.go:282] 0 containers: []
	W1212 21:08:40.552937    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:08:40.556929    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:08:40.586942    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:08:40.590940    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:08:40.620044    3672 logs.go:282] 0 containers: []
	W1212 21:08:40.620044    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:08:40.626540    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:08:40.656246    3672 logs.go:282] 0 containers: []
	W1212 21:08:40.656246    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:08:40.656246    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:08:40.656246    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:08:40.729966    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:08:40.729966    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:08:40.773607    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:08:40.773607    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:08:40.824231    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:08:40.824231    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:08:40.904109    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:08:40.904170    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:08:40.904195    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:08:40.951441    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:08:40.951441    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:08:40.995108    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:08:40.995172    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:08:41.034551    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:08:41.034610    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:08:41.065083    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:08:41.065083    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:08:43.624698    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:43.649800    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:08:43.685504    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:08:43.688788    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:08:43.719033    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:08:43.722522    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:08:43.776417    3672 logs.go:282] 0 containers: []
	W1212 21:08:43.776417    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:08:43.779722    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:08:43.809657    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:08:43.813648    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:08:43.839875    3672 logs.go:282] 0 containers: []
	W1212 21:08:43.839875    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:08:43.843975    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:08:43.874829    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:08:43.879349    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:08:43.908619    3672 logs.go:282] 0 containers: []
	W1212 21:08:43.908619    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:08:43.912060    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:08:43.941539    3672 logs.go:282] 0 containers: []
	W1212 21:08:43.941539    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:08:43.941539    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:08:43.941539    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:08:43.972217    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:08:43.972217    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:08:44.036599    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:08:44.036599    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:08:44.117294    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:08:44.117294    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:08:44.117294    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:08:44.163801    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:08:44.163801    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:08:44.213035    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:08:44.213035    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:08:44.260724    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:08:44.260774    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:08:44.296676    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:08:44.296676    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:08:44.347024    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:08:44.347024    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:08:46.888080    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:46.907062    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:08:46.936654    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:08:46.940086    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:08:46.972401    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:08:46.976286    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:08:47.008440    3672 logs.go:282] 0 containers: []
	W1212 21:08:47.008440    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:08:47.012500    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:08:47.045518    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:08:47.049067    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:08:47.078238    3672 logs.go:282] 0 containers: []
	W1212 21:08:47.078238    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:08:47.082884    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:08:47.113294    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:08:47.117296    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:08:47.147463    3672 logs.go:282] 0 containers: []
	W1212 21:08:47.147463    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:08:47.150948    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:08:47.177904    3672 logs.go:282] 0 containers: []
	W1212 21:08:47.177904    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:08:47.177904    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:08:47.177904    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:08:47.241848    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:08:47.241848    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:08:47.283640    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:08:47.283640    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:08:47.325102    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:08:47.325102    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:08:47.359958    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:08:47.359958    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:08:47.390502    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:08:47.390502    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:08:47.446581    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:08:47.446628    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:08:47.484895    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:08:47.484895    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:08:47.564963    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:08:47.565063    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:08:47.565111    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:08:50.122627    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:50.142614    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:08:50.181612    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:08:50.185612    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:08:50.254923    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:08:50.260393    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:08:50.298389    3672 logs.go:282] 0 containers: []
	W1212 21:08:50.298389    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:08:50.301365    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:08:50.333374    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:08:50.337370    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:08:50.373814    3672 logs.go:282] 0 containers: []
	W1212 21:08:50.373814    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:08:50.379831    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:08:50.411822    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:08:50.414800    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:08:50.449800    3672 logs.go:282] 0 containers: []
	W1212 21:08:50.449800    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:08:50.453811    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:08:50.491815    3672 logs.go:282] 0 containers: []
	W1212 21:08:50.491815    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:08:50.491815    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:08:50.491815    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:08:50.520803    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:08:50.520803    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:08:50.572947    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:08:50.572947    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:08:50.654922    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:08:50.654922    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:08:50.692900    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:08:50.692900    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:08:50.782347    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:08:50.782347    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:08:50.782347    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:08:50.827737    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:08:50.827783    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:08:50.880511    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:08:50.880511    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:08:50.931512    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:08:50.931512    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:08:53.475766    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:53.499014    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:08:53.536954    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:08:53.540945    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:08:53.579975    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:08:53.585948    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:08:53.622502    3672 logs.go:282] 0 containers: []
	W1212 21:08:53.622502    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:08:53.628789    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:08:53.662834    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:08:53.667843    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:08:53.704004    3672 logs.go:282] 0 containers: []
	W1212 21:08:53.704004    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:08:53.706997    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:08:53.739004    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:08:53.743005    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:08:53.777013    3672 logs.go:282] 0 containers: []
	W1212 21:08:53.777013    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:08:53.779999    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:08:53.814998    3672 logs.go:282] 0 containers: []
	W1212 21:08:53.814998    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:08:53.814998    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:08:53.814998    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:08:53.895001    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:08:53.895001    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:08:53.946693    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:08:53.946693    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:08:54.035719    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:08:54.035719    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:08:54.035719    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:08:54.088717    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:08:54.088717    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:08:54.129700    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:08:54.129700    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:08:54.172452    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:08:54.172540    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:08:54.224430    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:08:54.224430    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:08:54.295933    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:08:54.295933    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:08:56.830378    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:08:56.862749    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:08:56.900619    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:08:56.904621    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:08:56.933751    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:08:56.937103    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:08:56.966054    3672 logs.go:282] 0 containers: []
	W1212 21:08:56.966054    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:08:56.971094    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:08:57.010614    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:08:57.014183    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:08:57.048518    3672 logs.go:282] 0 containers: []
	W1212 21:08:57.048570    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:08:57.053708    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:08:57.093911    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:08:57.096903    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:08:57.127439    3672 logs.go:282] 0 containers: []
	W1212 21:08:57.127459    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:08:57.130736    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:08:57.162598    3672 logs.go:282] 0 containers: []
	W1212 21:08:57.162598    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:08:57.162598    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:08:57.162598    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:08:57.225602    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:08:57.225602    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:08:57.262596    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:08:57.262596    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:08:57.347604    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:08:57.347604    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:08:57.347604    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:08:57.378596    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:08:57.378596    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:08:57.423597    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:08:57.423597    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:08:57.467596    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:08:57.467596    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:08:57.512160    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:08:57.512160    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:08:57.550187    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:08:57.550187    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:09:00.116042    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:00.138579    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:09:00.174855    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:09:00.179020    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:09:00.219283    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:09:00.222288    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:09:00.280447    3672 logs.go:282] 0 containers: []
	W1212 21:09:00.280447    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:09:00.289609    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:09:00.321643    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:09:00.325898    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:09:00.359186    3672 logs.go:282] 0 containers: []
	W1212 21:09:00.359251    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:09:00.366266    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:09:00.402459    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:09:00.406743    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:09:00.433200    3672 logs.go:282] 0 containers: []
	W1212 21:09:00.433200    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:09:00.437407    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:09:00.469008    3672 logs.go:282] 0 containers: []
	W1212 21:09:00.469054    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:09:00.469103    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:09:00.469150    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:09:00.509433    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:09:00.509433    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:09:00.563472    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:09:00.564476    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:09:00.602236    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:09:00.602236    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:09:00.650631    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:09:00.650631    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:09:00.740979    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:09:00.741018    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:09:00.741018    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:09:00.786056    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:09:00.786056    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:09:00.828561    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:09:00.828561    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:09:00.874888    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:09:00.874888    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:09:03.451840    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:03.508440    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:09:03.552039    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:09:03.556788    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:09:03.588213    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:09:03.592894    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:09:03.620462    3672 logs.go:282] 0 containers: []
	W1212 21:09:03.620567    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:09:03.625579    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:09:03.656842    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:09:03.661339    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:09:03.691353    3672 logs.go:282] 0 containers: []
	W1212 21:09:03.691353    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:09:03.695099    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:09:03.728692    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:09:03.732398    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:09:03.759756    3672 logs.go:282] 0 containers: []
	W1212 21:09:03.759756    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:09:03.763504    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:09:03.793970    3672 logs.go:282] 0 containers: []
	W1212 21:09:03.793970    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:09:03.793970    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:09:03.793970    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:09:03.843978    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:09:03.843978    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:09:03.907707    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:09:03.907707    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:09:03.999252    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:09:03.999252    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:09:04.046876    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:09:04.046876    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:09:04.095346    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:09:04.095394    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:09:04.137371    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:09:04.137371    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:09:04.538830    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:09:04.538830    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:09:04.647647    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:09:04.647647    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:09:04.647726    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:09:07.209548    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:07.242763    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:09:07.276395    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:09:07.279395    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:09:07.307395    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:09:07.310395    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:09:07.347823    3672 logs.go:282] 0 containers: []
	W1212 21:09:07.347823    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:09:07.353586    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:09:07.383673    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:09:07.386663    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:09:07.413609    3672 logs.go:282] 0 containers: []
	W1212 21:09:07.413609    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:09:07.422495    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:09:07.451703    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:09:07.455577    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:09:07.491660    3672 logs.go:282] 0 containers: []
	W1212 21:09:07.491712    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:09:07.497808    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:09:07.535583    3672 logs.go:282] 0 containers: []
	W1212 21:09:07.535583    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:09:07.535583    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:09:07.535583    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:09:07.616489    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:09:07.616489    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:09:07.686648    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:09:07.686648    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:09:07.801130    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:09:07.801130    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:09:07.801130    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:09:07.855096    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:09:07.856104    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:09:07.903311    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:09:07.903311    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:09:07.937686    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:09:07.937686    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:09:07.979611    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:09:07.979611    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:09:08.037262    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:09:08.037262    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:09:10.605604    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:10.629669    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:09:10.664671    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:09:10.668748    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:09:10.701168    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:09:10.705223    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:09:10.758478    3672 logs.go:282] 0 containers: []
	W1212 21:09:10.758478    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:09:10.764129    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:09:10.796029    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:09:10.800642    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:09:10.832366    3672 logs.go:282] 0 containers: []
	W1212 21:09:10.832433    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:09:10.837172    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:09:10.875351    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:09:10.880352    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:09:10.914336    3672 logs.go:282] 0 containers: []
	W1212 21:09:10.914336    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:09:10.918801    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:09:10.949840    3672 logs.go:282] 0 containers: []
	W1212 21:09:10.949840    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:09:10.949840    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:09:10.949840    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:09:10.994767    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:09:10.994767    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:09:11.088098    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:09:11.088098    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:09:11.088098    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:09:11.141823    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:09:11.141823    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:09:11.189618    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:09:11.189618    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:09:11.232771    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:09:11.232771    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:09:11.307160    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:09:11.307160    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:09:11.358751    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:09:11.358751    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:09:11.396846    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:09:11.396846    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:09:13.954783    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:13.978405    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:09:14.007594    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:09:14.013574    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:09:14.052878    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:09:14.056876    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:09:14.089071    3672 logs.go:282] 0 containers: []
	W1212 21:09:14.089071    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:09:14.093072    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:09:14.127398    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:09:14.132087    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:09:14.158162    3672 logs.go:282] 0 containers: []
	W1212 21:09:14.158162    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:09:14.161164    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:09:14.191160    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:09:14.194156    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:09:14.223236    3672 logs.go:282] 0 containers: []
	W1212 21:09:14.223236    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:09:14.227854    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:09:14.256602    3672 logs.go:282] 0 containers: []
	W1212 21:09:14.256602    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:09:14.256602    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:09:14.256602    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:09:14.304249    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:09:14.304249    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:09:14.348400    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:09:14.348400    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:09:14.381821    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:09:14.381821    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:09:14.441159    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:09:14.441159    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:09:14.489753    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:09:14.490742    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:09:14.525140    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:09:14.525228    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:09:14.594845    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:09:14.594845    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:09:14.632847    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:09:14.632847    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:09:14.720108    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:09:17.224050    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:17.244046    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:09:17.277138    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:09:17.280114    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:09:17.310107    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:09:17.313106    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:09:17.344926    3672 logs.go:282] 0 containers: []
	W1212 21:09:17.344926    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:09:17.350560    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:09:17.383617    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:09:17.387252    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:09:17.420772    3672 logs.go:282] 0 containers: []
	W1212 21:09:17.420772    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:09:17.424643    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:09:17.457657    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:09:17.461655    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:09:17.493225    3672 logs.go:282] 0 containers: []
	W1212 21:09:17.493225    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:09:17.496226    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:09:17.523235    3672 logs.go:282] 0 containers: []
	W1212 21:09:17.523235    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:09:17.523235    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:09:17.523235    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:09:17.591072    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:09:17.591072    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:09:17.634138    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:09:17.634138    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:09:17.682006    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:09:17.682006    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:09:17.717494    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:09:17.717494    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:09:17.802768    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:09:17.802768    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:09:17.802768    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:09:17.850634    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:09:17.850634    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:09:17.893474    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:09:17.894041    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:09:17.928656    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:09:17.928656    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:09:20.485119    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:20.513001    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:09:20.542382    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:09:20.546046    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:09:20.583879    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:09:20.588340    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:09:20.623223    3672 logs.go:282] 0 containers: []
	W1212 21:09:20.623223    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:09:20.627222    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:09:20.662223    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:09:20.667235    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:09:20.701247    3672 logs.go:282] 0 containers: []
	W1212 21:09:20.701247    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:09:20.705225    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:09:20.737226    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:09:20.741229    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:09:20.787232    3672 logs.go:282] 0 containers: []
	W1212 21:09:20.787232    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:09:20.792230    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:09:20.826412    3672 logs.go:282] 0 containers: []
	W1212 21:09:20.826412    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:09:20.826412    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:09:20.826412    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:09:20.884994    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:09:20.884994    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:09:20.923981    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:09:20.923981    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:09:21.018579    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:09:21.018579    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:09:21.018579    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:09:21.069593    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:09:21.069593    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:09:21.145579    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:09:21.145579    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:09:21.198566    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:09:21.198566    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:09:21.287958    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:09:21.287958    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:09:21.322839    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:09:21.322839    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:09:23.862903    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:23.886809    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:09:23.919257    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:09:23.923404    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:09:23.961519    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:09:23.967487    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:09:24.005442    3672 logs.go:282] 0 containers: []
	W1212 21:09:24.005442    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:09:24.008432    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:09:24.045399    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:09:24.049810    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:09:24.083168    3672 logs.go:282] 0 containers: []
	W1212 21:09:24.083168    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:09:24.086896    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:09:24.122297    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:09:24.125713    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:09:24.154642    3672 logs.go:282] 0 containers: []
	W1212 21:09:24.154642    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:09:24.159017    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:09:24.194835    3672 logs.go:282] 0 containers: []
	W1212 21:09:24.194835    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:09:24.194835    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:09:24.194835    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:09:24.233821    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:09:24.233821    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:09:24.291386    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:09:24.291386    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:09:24.334380    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:09:24.334380    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:09:24.377711    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:09:24.377711    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:09:24.408264    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:09:24.408264    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:09:24.460333    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:09:24.460333    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:09:24.540243    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:09:24.540243    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:09:24.540327    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:09:24.580422    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:09:24.580422    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:09:27.146592    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:27.173057    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:09:27.212839    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:09:27.218252    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:09:27.272052    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:09:27.275044    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:09:27.308062    3672 logs.go:282] 0 containers: []
	W1212 21:09:27.308062    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:09:27.311049    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:09:27.342045    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:09:27.345053    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:09:27.375047    3672 logs.go:282] 0 containers: []
	W1212 21:09:27.375047    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:09:27.378048    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:09:27.408538    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:09:27.413576    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:09:27.444569    3672 logs.go:282] 0 containers: []
	W1212 21:09:27.444628    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:09:27.448620    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:09:27.484353    3672 logs.go:282] 0 containers: []
	W1212 21:09:27.484353    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:09:27.484353    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:09:27.484353    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:09:27.555365    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:09:27.555365    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:09:27.636368    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:09:27.636368    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:09:27.636368    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:09:27.681771    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:09:27.681835    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:09:27.725860    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:09:27.725860    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:09:27.761868    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:09:27.761868    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:09:27.802538    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:09:27.802538    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:09:27.851532    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:09:27.851532    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:09:27.893531    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:09:27.893531    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:09:30.452284    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:30.476248    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:09:30.510699    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:09:30.514940    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:09:30.555101    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:09:30.559267    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:09:30.590871    3672 logs.go:282] 0 containers: []
	W1212 21:09:30.590951    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:09:30.596735    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:09:30.630224    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:09:30.634392    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:09:30.667384    3672 logs.go:282] 0 containers: []
	W1212 21:09:30.667384    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:09:30.674038    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:09:30.706310    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:09:30.711004    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:09:30.747627    3672 logs.go:282] 0 containers: []
	W1212 21:09:30.747627    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:09:30.755933    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:09:30.784709    3672 logs.go:282] 0 containers: []
	W1212 21:09:30.784709    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:09:30.784709    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:09:30.784709    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:09:30.817655    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:09:30.817655    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:09:30.895703    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:09:30.895703    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:09:30.933953    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:09:30.933953    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:09:30.984240    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:09:30.984240    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:09:31.034754    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:09:31.034754    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:09:31.081289    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:09:31.081340    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:09:31.144240    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:09:31.144240    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:09:31.245996    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:09:31.245996    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:09:31.245996    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:09:33.809907    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:33.833529    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:09:33.868564    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:09:33.872972    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:09:33.902835    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:09:33.905840    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:09:33.935954    3672 logs.go:282] 0 containers: []
	W1212 21:09:33.935954    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:09:33.943426    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:09:33.972411    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:09:33.975417    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:09:34.006006    3672 logs.go:282] 0 containers: []
	W1212 21:09:34.006006    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:09:34.013536    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:09:34.046705    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:09:34.050355    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:09:34.079245    3672 logs.go:282] 0 containers: []
	W1212 21:09:34.079245    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:09:34.083509    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:09:34.111595    3672 logs.go:282] 0 containers: []
	W1212 21:09:34.111595    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:09:34.111595    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:09:34.111595    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:09:34.177657    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:09:34.177657    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:09:34.218628    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:09:34.218628    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:09:34.260260    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:09:34.260260    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:09:34.293632    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:09:34.293632    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:09:34.341844    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:09:34.341905    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:09:34.425885    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:09:34.425885    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:09:34.425885    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:09:34.476709    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:09:34.476709    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:09:34.524073    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:09:34.524073    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:09:37.073525    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:37.096842    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:09:37.130094    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:09:37.133557    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:09:37.165908    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:09:37.170028    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:09:37.198771    3672 logs.go:282] 0 containers: []
	W1212 21:09:37.198771    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:09:37.202552    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:09:37.234680    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:09:37.238063    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:09:37.266529    3672 logs.go:282] 0 containers: []
	W1212 21:09:37.266529    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:09:37.270723    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:09:37.301008    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:09:37.305268    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:09:37.336775    3672 logs.go:282] 0 containers: []
	W1212 21:09:37.336775    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:09:37.344318    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:09:37.374091    3672 logs.go:282] 0 containers: []
	W1212 21:09:37.374091    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:09:37.374091    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:09:37.374091    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:09:37.434944    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:09:37.434944    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:09:37.483336    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:09:37.483336    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:09:37.523108    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:09:37.523108    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:09:37.573381    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:09:37.573458    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:09:37.609133    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:09:37.609133    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:09:37.690336    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:09:37.690508    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:09:37.690543    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:09:37.767283    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:09:37.767808    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:09:37.801949    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:09:37.801987    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:09:40.337931    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:40.360897    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:09:40.393445    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:09:40.397678    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:09:40.428767    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:09:40.432773    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:09:40.463784    3672 logs.go:282] 0 containers: []
	W1212 21:09:40.463784    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:09:40.467769    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:09:40.498623    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:09:40.502679    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:09:40.536299    3672 logs.go:282] 0 containers: []
	W1212 21:09:40.536299    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:09:40.540303    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:09:40.573304    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:09:40.577297    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:09:40.607537    3672 logs.go:282] 0 containers: []
	W1212 21:09:40.607537    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:09:40.612224    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:09:40.641267    3672 logs.go:282] 0 containers: []
	W1212 21:09:40.641267    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:09:40.641267    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:09:40.641267    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:09:40.698331    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:09:40.698331    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:09:40.739301    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:09:40.739301    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:09:40.780312    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:09:40.780312    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:09:40.810305    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:09:40.810305    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:09:40.875308    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:09:40.875308    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:09:40.916690    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:09:40.916690    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:09:40.990273    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:09:40.990329    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:09:41.030470    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:09:41.030470    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:09:41.117974    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:09:43.622789    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:43.646251    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:09:43.679952    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:09:43.684928    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:09:43.716938    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:09:43.720934    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:09:43.769930    3672 logs.go:282] 0 containers: []
	W1212 21:09:43.769930    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:09:43.772944    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:09:43.801947    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:09:43.804948    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:09:43.833115    3672 logs.go:282] 0 containers: []
	W1212 21:09:43.833115    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:09:43.836869    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:09:43.865914    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:09:43.869917    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:09:43.900616    3672 logs.go:282] 0 containers: []
	W1212 21:09:43.900616    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:09:43.905932    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:09:43.934466    3672 logs.go:282] 0 containers: []
	W1212 21:09:43.934466    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:09:43.934466    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:09:43.934466    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:09:43.970469    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:09:43.970469    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:09:44.051943    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:09:44.052013    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:09:44.052013    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:09:44.098439    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:09:44.098439    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:09:44.144442    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:09:44.144442    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:09:44.181113    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:09:44.181113    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:09:44.246341    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:09:44.246341    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:09:44.300092    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:09:44.300159    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:09:44.332372    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:09:44.332372    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:09:46.893976    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:46.916533    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:09:46.946987    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:09:46.950934    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:09:46.985299    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:09:46.988655    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:09:47.021016    3672 logs.go:282] 0 containers: []
	W1212 21:09:47.021016    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:09:47.024377    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:09:47.058447    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:09:47.062414    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:09:47.092362    3672 logs.go:282] 0 containers: []
	W1212 21:09:47.092362    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:09:47.098848    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:09:47.129652    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:09:47.133671    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:09:47.162662    3672 logs.go:282] 0 containers: []
	W1212 21:09:47.162662    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:09:47.165651    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:09:47.197662    3672 logs.go:282] 0 containers: []
	W1212 21:09:47.197662    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:09:47.197662    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:09:47.197662    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:09:47.258653    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:09:47.258653    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:09:47.311152    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:09:47.311152    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:09:47.358030    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:09:47.358030    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:09:47.390623    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:09:47.390623    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:09:47.444327    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:09:47.444327    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:09:47.485155    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:09:47.485155    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:09:47.564814    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:09:47.564814    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:09:47.564814    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:09:47.606618    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:09:47.606618    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:09:50.147386    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:50.171755    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:09:50.216984    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:09:50.221632    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:09:50.257408    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:09:50.261837    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:09:50.301195    3672 logs.go:282] 0 containers: []
	W1212 21:09:50.301261    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:09:50.306240    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:09:50.338229    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:09:50.343514    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:09:50.374386    3672 logs.go:282] 0 containers: []
	W1212 21:09:50.374446    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:09:50.380247    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:09:50.416072    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:09:50.420488    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:09:50.462172    3672 logs.go:282] 0 containers: []
	W1212 21:09:50.462172    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:09:50.467530    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:09:50.497290    3672 logs.go:282] 0 containers: []
	W1212 21:09:50.497290    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:09:50.497290    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:09:50.497290    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:09:50.565000    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:09:50.565000    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:09:50.649938    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:09:50.649938    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:09:50.649938    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:09:50.691446    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:09:50.691446    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:09:50.732064    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:09:50.732064    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:09:50.783324    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:09:50.783390    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:09:50.820421    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:09:50.820421    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:09:50.872970    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:09:50.872970    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:09:50.915648    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:09:50.915741    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:09:53.477319    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:53.500439    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:09:53.536169    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:09:53.539975    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:09:53.571301    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:09:53.574553    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:09:53.605160    3672 logs.go:282] 0 containers: []
	W1212 21:09:53.605160    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:09:53.615357    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:09:53.649309    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:09:53.653785    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:09:53.681193    3672 logs.go:282] 0 containers: []
	W1212 21:09:53.681193    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:09:53.685156    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:09:53.717740    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:09:53.721516    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:09:53.753377    3672 logs.go:282] 0 containers: []
	W1212 21:09:53.753406    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:09:53.757005    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:09:53.783786    3672 logs.go:282] 0 containers: []
	W1212 21:09:53.783786    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:09:53.783786    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:09:53.783786    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:09:53.832359    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:09:53.832359    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:09:53.882193    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:09:53.882260    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:09:53.946833    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:09:53.946833    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:09:53.992239    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:09:53.992239    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:09:54.034857    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:09:54.034857    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:09:54.072665    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:09:54.072665    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:09:54.105268    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:09:54.105268    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:09:54.140427    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:09:54.141430    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:09:54.260494    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:09:56.765743    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:56.792262    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:09:56.828231    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:09:56.831204    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:09:56.866936    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:09:56.870935    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:09:56.902371    3672 logs.go:282] 0 containers: []
	W1212 21:09:56.902371    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:09:56.906197    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:09:56.939075    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:09:56.942986    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:09:56.975878    3672 logs.go:282] 0 containers: []
	W1212 21:09:56.976408    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:09:56.981096    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:09:57.009740    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:09:57.012759    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:09:57.041343    3672 logs.go:282] 0 containers: []
	W1212 21:09:57.041343    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:09:57.045066    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:09:57.079529    3672 logs.go:282] 0 containers: []
	W1212 21:09:57.079529    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:09:57.079529    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:09:57.079529    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:09:57.136070    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:09:57.136070    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:09:57.178740    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:09:57.178740    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:09:57.221763    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:09:57.221763    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:09:57.284312    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:09:57.284312    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:09:57.362940    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:09:57.362940    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:09:57.362940    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:09:57.410682    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:09:57.410682    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:09:57.440114    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:09:57.440114    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:09:57.512243    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:09:57.512243    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:10:00.054193    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:00.073540    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:10:00.108547    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:10:00.113536    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:10:00.150546    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:10:00.153540    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:10:00.189550    3672 logs.go:282] 0 containers: []
	W1212 21:10:00.189550    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:10:00.193546    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:10:00.224019    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:10:00.228015    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:10:00.267108    3672 logs.go:282] 0 containers: []
	W1212 21:10:00.267108    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:10:00.271448    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:10:00.303418    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:10:00.307345    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:10:00.336920    3672 logs.go:282] 0 containers: []
	W1212 21:10:00.336920    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:10:00.339893    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:10:00.372109    3672 logs.go:282] 0 containers: []
	W1212 21:10:00.372109    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:10:00.372109    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:10:00.373105    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:10:00.436085    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:10:00.436085    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:10:00.480093    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:10:00.480093    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:10:00.523972    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:10:00.523972    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:10:00.568972    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:10:00.568972    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:10:00.610285    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:10:00.610285    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:10:00.646314    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:10:00.646314    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:10:00.730439    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:10:00.730439    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:10:00.730439    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:10:00.760445    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:10:00.760445    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:10:03.315483    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:03.339272    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:10:03.373762    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:10:03.377916    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:10:03.409846    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:10:03.416138    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:10:03.456877    3672 logs.go:282] 0 containers: []
	W1212 21:10:03.456877    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:10:03.465758    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:10:03.503338    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:10:03.507287    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:10:03.538495    3672 logs.go:282] 0 containers: []
	W1212 21:10:03.538495    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:10:03.545786    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:10:03.585109    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:10:03.589725    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:10:03.627159    3672 logs.go:282] 0 containers: []
	W1212 21:10:03.627159    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:10:03.632765    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:10:03.664549    3672 logs.go:282] 0 containers: []
	W1212 21:10:03.664549    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:10:03.664549    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:10:03.664549    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:10:03.704077    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:10:03.704077    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:10:03.765330    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:10:03.765330    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:10:03.812957    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:10:03.812957    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:10:03.847644    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:10:03.847644    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:10:03.907771    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:10:03.907771    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:10:03.948378    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:10:03.948378    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:10:04.031526    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:10:04.031526    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:10:04.031526    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:10:04.081427    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:10:04.081427    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:10:06.635250    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:06.660257    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:10:06.694087    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:10:06.698178    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:10:06.728630    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:10:06.732786    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:10:06.764983    3672 logs.go:282] 0 containers: []
	W1212 21:10:06.764983    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:10:06.768982    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:10:06.797985    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:10:06.800989    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:10:06.834440    3672 logs.go:282] 0 containers: []
	W1212 21:10:06.834440    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:10:06.837926    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:10:06.870200    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:10:06.875047    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:10:06.909636    3672 logs.go:282] 0 containers: []
	W1212 21:10:06.909636    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:10:06.915888    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:10:06.948978    3672 logs.go:282] 0 containers: []
	W1212 21:10:06.948978    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:10:06.948978    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:10:06.948978    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:10:07.017696    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:10:07.017696    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:10:07.056053    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:10:07.056053    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:10:07.144744    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:10:07.144744    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:10:07.144744    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:10:07.194288    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:10:07.194288    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:10:07.234291    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:10:07.234291    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:10:07.276773    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:10:07.276773    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:10:07.312070    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:10:07.312070    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:10:07.345399    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:10:07.345424    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:10:09.904259    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:09.930253    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:10:09.965007    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:10:09.970075    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:10:10.006390    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:10:10.010533    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:10:10.040089    3672 logs.go:282] 0 containers: []
	W1212 21:10:10.040089    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:10:10.044116    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:10:10.078397    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:10:10.083045    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:10:10.113560    3672 logs.go:282] 0 containers: []
	W1212 21:10:10.113560    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:10:10.119551    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:10:10.161159    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:10:10.165189    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:10:10.197565    3672 logs.go:282] 0 containers: []
	W1212 21:10:10.197565    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:10:10.200565    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:10:10.233571    3672 logs.go:282] 0 containers: []
	W1212 21:10:10.233571    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:10:10.233571    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:10:10.233571    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:10:10.269751    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:10:10.269751    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:10:10.367927    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:10:10.367927    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:10:10.367927    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:10:10.419822    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:10:10.420346    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:10:10.469012    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:10:10.469012    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:10:10.515734    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:10:10.515734    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:10:10.549158    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:10:10.549238    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:10:10.579634    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:10:10.579634    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:10:10.641693    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:10:10.641693    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:10:13.206199    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:13.230621    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:10:13.266200    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:10:13.270287    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:10:13.305280    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:10:13.311371    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:10:13.338526    3672 logs.go:282] 0 containers: []
	W1212 21:10:13.338526    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:10:13.341887    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:10:13.374762    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:10:13.377751    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:10:13.409864    3672 logs.go:282] 0 containers: []
	W1212 21:10:13.409864    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:10:13.414257    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:10:13.450875    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:10:13.454916    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:10:13.486341    3672 logs.go:282] 0 containers: []
	W1212 21:10:13.486341    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:10:13.490011    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:10:13.524647    3672 logs.go:282] 0 containers: []
	W1212 21:10:13.524647    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:10:13.524647    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:10:13.524647    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:10:13.563887    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:10:13.563887    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:10:13.647901    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:10:13.647901    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:10:13.647901    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:10:13.693152    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:10:13.693152    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:10:13.760002    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:10:13.760002    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:10:13.803569    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:10:13.803569    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:10:13.840388    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:10:13.840388    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:10:13.871409    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:10:13.871409    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:10:13.925497    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:10:13.926080    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:10:16.497699    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:16.520405    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:10:16.557364    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:10:16.561007    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:10:16.592583    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:10:16.596801    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:10:16.633265    3672 logs.go:282] 0 containers: []
	W1212 21:10:16.633265    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:10:16.637471    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:10:16.677214    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:10:16.681702    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:10:16.711559    3672 logs.go:282] 0 containers: []
	W1212 21:10:16.711559    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:10:16.714553    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:10:16.765104    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:10:16.769817    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:10:16.801398    3672 logs.go:282] 0 containers: []
	W1212 21:10:16.801398    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:10:16.805470    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:10:16.834542    3672 logs.go:282] 0 containers: []
	W1212 21:10:16.834542    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:10:16.834542    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:10:16.834542    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:10:16.907853    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:10:16.907853    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:10:16.988114    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:10:16.988114    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:10:16.988114    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:10:17.028307    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:10:17.028307    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:10:17.083011    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:10:17.083043    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:10:17.120939    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:10:17.120939    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:10:17.178001    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:10:17.178001    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:10:17.224029    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:10:17.224029    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:10:17.265251    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:10:17.265251    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:10:19.800992    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:19.911985    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:10:19.947663    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:10:19.952897    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:10:19.982045    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:10:19.985917    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:10:20.015775    3672 logs.go:282] 0 containers: []
	W1212 21:10:20.015775    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:10:20.018771    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:10:20.052815    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:10:20.058312    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:10:20.088142    3672 logs.go:282] 0 containers: []
	W1212 21:10:20.088142    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:10:20.092608    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:10:20.128522    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:10:20.132851    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:10:20.168815    3672 logs.go:282] 0 containers: []
	W1212 21:10:20.168815    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:10:20.172823    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:10:20.199830    3672 logs.go:282] 0 containers: []
	W1212 21:10:20.199830    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:10:20.199830    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:10:20.199830    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:10:20.240923    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:10:20.240923    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:10:20.303500    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:10:20.303500    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:10:20.381313    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:10:20.381313    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:10:20.428315    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:10:20.428315    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:10:20.464320    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:10:20.464320    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:10:20.496313    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:10:20.496313    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:10:20.534398    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:10:20.534398    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:10:20.616230    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:10:20.616230    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:10:20.616230    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:10:23.164817    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:23.189632    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:10:23.216923    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:10:23.221674    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:10:23.254575    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:10:23.260388    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:10:23.290440    3672 logs.go:282] 0 containers: []
	W1212 21:10:23.290486    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:10:23.295472    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:10:23.323120    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:10:23.327118    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:10:23.364700    3672 logs.go:282] 0 containers: []
	W1212 21:10:23.364700    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:10:23.371113    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:10:23.404240    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:10:23.407442    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:10:23.436387    3672 logs.go:282] 0 containers: []
	W1212 21:10:23.436416    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:10:23.440064    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:10:23.475314    3672 logs.go:282] 0 containers: []
	W1212 21:10:23.475373    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:10:23.475373    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:10:23.475373    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:10:23.509769    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:10:23.509769    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:10:23.600118    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:10:23.600118    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:10:23.600118    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:10:23.641751    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:10:23.641751    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:10:23.686515    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:10:23.686515    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:10:23.766509    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:10:23.766509    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:10:23.814067    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:10:23.814067    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:10:23.866625    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:10:23.866625    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:10:23.906951    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:10:23.906951    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:10:26.475924    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:26.500325    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:10:26.529401    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:10:26.533488    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:10:26.572364    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:10:26.579141    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:10:26.611392    3672 logs.go:282] 0 containers: []
	W1212 21:10:26.611439    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:10:26.616465    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:10:26.658707    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:10:26.663880    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:10:26.696358    3672 logs.go:282] 0 containers: []
	W1212 21:10:26.696358    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:10:26.701996    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:10:26.732502    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:10:26.736505    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:10:26.768915    3672 logs.go:282] 0 containers: []
	W1212 21:10:26.768915    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:10:26.772926    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:10:26.806784    3672 logs.go:282] 0 containers: []
	W1212 21:10:26.806784    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:10:26.806784    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:10:26.806784    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:10:26.882883    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:10:26.882883    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:10:26.920885    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:10:26.920885    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:10:27.007595    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:10:27.007595    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:10:27.007595    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:10:27.056503    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:10:27.056503    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:10:27.113556    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:10:27.113581    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:10:27.171099    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:10:27.171099    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:10:27.223069    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:10:27.223069    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:10:27.267588    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:10:27.267641    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:10:29.810270    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:29.833517    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:10:29.869149    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:10:29.873526    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:10:29.909575    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:10:29.915583    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:10:29.947765    3672 logs.go:282] 0 containers: []
	W1212 21:10:29.947917    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:10:29.951830    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:10:29.987042    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:10:29.990878    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:10:30.019302    3672 logs.go:282] 0 containers: []
	W1212 21:10:30.019302    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:10:30.025095    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:10:30.055536    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:10:30.062108    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:10:30.099955    3672 logs.go:282] 0 containers: []
	W1212 21:10:30.099955    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:10:30.104919    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:10:30.137959    3672 logs.go:282] 0 containers: []
	W1212 21:10:30.138001    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:10:30.138001    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:10:30.138052    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:10:30.217728    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:10:30.217728    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:10:30.259633    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:10:30.260212    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:10:30.305225    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:10:30.305225    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:10:30.360636    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:10:30.360636    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:10:30.453486    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:10:30.453546    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:10:30.453596    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:10:30.507910    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:10:30.507910    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:10:30.558780    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:10:30.558780    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:10:30.603958    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:10:30.603958    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:10:33.146207    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:33.171414    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:10:33.205883    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:10:33.209366    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:10:33.241688    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:10:33.245109    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:10:33.277596    3672 logs.go:282] 0 containers: []
	W1212 21:10:33.277641    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:10:33.281963    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:10:33.316112    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:10:33.320353    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:10:33.356312    3672 logs.go:282] 0 containers: []
	W1212 21:10:33.356385    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:10:33.360487    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:10:33.393144    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:10:33.401133    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:10:33.433696    3672 logs.go:282] 0 containers: []
	W1212 21:10:33.433696    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:10:33.438917    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:10:33.468506    3672 logs.go:282] 0 containers: []
	W1212 21:10:33.468506    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:10:33.468506    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:10:33.468506    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:10:33.512605    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:10:33.512605    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:10:33.547246    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:10:33.547246    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:10:33.587143    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:10:33.587143    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:10:33.632663    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:10:33.632663    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:10:33.675417    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:10:33.675417    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:10:33.757296    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:10:33.757359    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:10:33.827550    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:10:33.827550    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:10:33.915671    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:10:33.915671    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:10:33.915671    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:10:36.472132    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:36.497405    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:10:36.527961    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:10:36.531912    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:10:36.565288    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:10:36.568282    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:10:36.600078    3672 logs.go:282] 0 containers: []
	W1212 21:10:36.600078    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:10:36.603757    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:10:36.641249    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:10:36.645238    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:10:36.678231    3672 logs.go:282] 0 containers: []
	W1212 21:10:36.678231    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:10:36.681228    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:10:36.713220    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:10:36.716221    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:10:36.767229    3672 logs.go:282] 0 containers: []
	W1212 21:10:36.767229    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:10:36.770235    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:10:36.799231    3672 logs.go:282] 0 containers: []
	W1212 21:10:36.799231    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:10:36.799231    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:10:36.799231    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:10:36.847446    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:10:36.847446    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:10:36.892998    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:10:36.892998    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:10:36.925998    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:10:36.925998    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:10:36.979000    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:10:36.979000    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:10:37.040008    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:10:37.040008    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:10:37.081005    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:10:37.081005    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:10:37.171609    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:10:37.171609    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:10:37.171609    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:10:37.214597    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:10:37.214597    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:10:39.757098    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:39.784525    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:10:39.826275    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:10:39.830278    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:10:39.859204    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:10:39.863283    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:10:39.893804    3672 logs.go:282] 0 containers: []
	W1212 21:10:39.893804    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:10:39.899191    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:10:39.928025    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:10:39.931324    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:10:39.961949    3672 logs.go:282] 0 containers: []
	W1212 21:10:39.961949    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:10:39.966429    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:10:40.001810    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:10:40.006231    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:10:40.037226    3672 logs.go:282] 0 containers: []
	W1212 21:10:40.037226    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:10:40.041248    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:10:40.074046    3672 logs.go:282] 0 containers: []
	W1212 21:10:40.074046    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:10:40.074046    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:10:40.074046    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:10:40.129625    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:10:40.129625    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:10:40.167885    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:10:40.167885    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:10:40.248377    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:10:40.248377    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:10:40.248377    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:10:40.294558    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:10:40.294558    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:10:40.342880    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:10:40.342981    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:10:40.375929    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:10:40.376503    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:10:40.446034    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:10:40.446034    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:10:40.497793    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:10:40.497793    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:10:43.048984    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:43.070982    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:10:43.105790    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:10:43.109565    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:10:43.148481    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:10:43.154784    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:10:43.187687    3672 logs.go:282] 0 containers: []
	W1212 21:10:43.187687    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:10:43.191664    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:10:43.221679    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:10:43.225665    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:10:43.255674    3672 logs.go:282] 0 containers: []
	W1212 21:10:43.255674    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:10:43.259663    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:10:43.291674    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:10:43.295658    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:10:43.326665    3672 logs.go:282] 0 containers: []
	W1212 21:10:43.326665    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:10:43.329664    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:10:43.356667    3672 logs.go:282] 0 containers: []
	W1212 21:10:43.356667    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:10:43.356667    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:10:43.356667    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:10:43.396669    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:10:43.397677    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:10:43.437663    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:10:43.437663    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:10:43.480666    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:10:43.480666    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:10:43.528661    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:10:43.528661    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:10:43.582664    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:10:43.582664    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:10:43.624263    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:10:43.624263    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:10:43.702841    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:10:43.702880    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:10:43.774953    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:10:43.775943    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:10:43.864949    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:10:46.370935    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:46.400793    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:10:46.439076    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:10:46.443083    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:10:46.482074    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:10:46.485080    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:10:46.514854    3672 logs.go:282] 0 containers: []
	W1212 21:10:46.514919    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:10:46.518599    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:10:46.552826    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:10:46.558162    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:10:46.589122    3672 logs.go:282] 0 containers: []
	W1212 21:10:46.589165    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:10:46.593173    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:10:46.627741    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:10:46.632123    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:10:46.667483    3672 logs.go:282] 0 containers: []
	W1212 21:10:46.667574    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:10:46.672861    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:10:46.703445    3672 logs.go:282] 0 containers: []
	W1212 21:10:46.703445    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:10:46.703445    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:10:46.703445    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:10:46.758732    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:10:46.758732    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:10:46.881800    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:10:46.881800    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:10:46.956722    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:10:46.956722    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:10:46.998825    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:10:46.998825    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:10:47.087597    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:10:47.087691    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:10:47.087691    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:10:47.134739    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:10:47.134739    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:10:47.177101    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:10:47.177637    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:10:47.210552    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:10:47.210552    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:10:49.789081    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:50.086714    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:10:50.131080    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:10:50.137292    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:10:50.175557    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:10:50.178561    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:10:50.210552    3672 logs.go:282] 0 containers: []
	W1212 21:10:50.210552    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:10:50.213547    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:10:50.268481    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:10:50.273416    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:10:50.307037    3672 logs.go:282] 0 containers: []
	W1212 21:10:50.307037    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:10:50.313170    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:10:50.344125    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:10:50.347140    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:10:50.380131    3672 logs.go:282] 0 containers: []
	W1212 21:10:50.380131    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:10:50.384119    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:10:50.412499    3672 logs.go:282] 0 containers: []
	W1212 21:10:50.412499    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:10:50.412499    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:10:50.412499    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:10:50.447940    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:10:50.447940    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:10:50.488943    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:10:50.488943    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:10:50.574239    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:10:50.574297    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:10:50.574350    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:10:50.638429    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:10:50.638429    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:10:50.683699    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:10:50.683699    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:10:50.723045    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:10:50.723100    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:10:50.767041    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:10:50.767041    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:10:50.994601    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:10:50.994601    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:10:53.562073    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:53.581072    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:10:53.609073    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:10:53.613068    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:10:53.640080    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:10:53.643079    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:10:53.678220    3672 logs.go:282] 0 containers: []
	W1212 21:10:53.678270    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:10:53.684109    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:10:53.735088    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:10:53.739090    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:10:53.775085    3672 logs.go:282] 0 containers: []
	W1212 21:10:53.775085    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:10:53.778082    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:10:53.809080    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:10:53.812086    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:10:53.845939    3672 logs.go:282] 0 containers: []
	W1212 21:10:53.845939    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:10:53.849952    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:10:53.879933    3672 logs.go:282] 0 containers: []
	W1212 21:10:53.879933    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:10:53.879933    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:10:53.879933    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:10:53.951936    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:10:53.951936    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:10:54.032949    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:10:54.032949    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:10:54.032949    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:10:54.095224    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:10:54.095264    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:10:54.145602    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:10:54.145602    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:10:54.184081    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:10:54.184081    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:10:54.214021    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:10:54.214021    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:10:54.271389    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:10:54.271389    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:10:54.347383    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:10:54.348374    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:10:56.910493    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:56.934881    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:10:56.970325    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:10:56.974307    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:10:57.004892    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:10:57.008732    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:10:57.036253    3672 logs.go:282] 0 containers: []
	W1212 21:10:57.036253    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:10:57.040109    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:10:57.070834    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:10:57.074816    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:10:57.106821    3672 logs.go:282] 0 containers: []
	W1212 21:10:57.106821    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:10:57.111104    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:10:57.140147    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:10:57.144032    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:10:57.175586    3672 logs.go:282] 0 containers: []
	W1212 21:10:57.175586    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:10:57.179507    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:10:57.208736    3672 logs.go:282] 0 containers: []
	W1212 21:10:57.208736    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:10:57.208736    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:10:57.208736    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:10:57.251293    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:10:57.251370    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:10:57.324903    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:10:57.324903    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:10:57.376127    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:10:57.376127    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:10:57.424117    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:10:57.424117    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:10:57.468114    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:10:57.468114    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:10:57.534110    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:10:57.535121    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:10:57.658133    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:10:57.658133    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:10:57.658133    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:10:57.708125    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:10:57.708125    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:11:00.273639    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:00.299745    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:11:00.337460    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:11:00.341572    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:11:00.374547    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:11:00.378558    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:11:00.412549    3672 logs.go:282] 0 containers: []
	W1212 21:11:00.412549    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:11:00.418558    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:11:00.456546    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:11:00.459550    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:11:00.489695    3672 logs.go:282] 0 containers: []
	W1212 21:11:00.489695    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:11:00.493453    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:11:00.526155    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:11:00.531965    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:11:00.563213    3672 logs.go:282] 0 containers: []
	W1212 21:11:00.563213    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:11:00.567717    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:11:00.597455    3672 logs.go:282] 0 containers: []
	W1212 21:11:00.597531    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:11:00.597568    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:11:00.597568    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:11:00.691945    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:11:00.691945    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:11:00.691945    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:11:00.771684    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:11:00.771684    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:11:00.825203    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:11:00.826193    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:11:00.873628    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:11:00.873628    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:11:00.937571    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:11:00.937571    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:11:01.003572    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:11:01.004573    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:11:01.045463    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:11:01.045463    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:11:01.098008    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:11:01.098008    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:11:03.645727    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:03.669292    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:11:03.705703    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:11:03.708702    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:11:03.755862    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:11:03.758864    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:11:03.794783    3672 logs.go:282] 0 containers: []
	W1212 21:11:03.794841    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:11:03.799688    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:11:03.831556    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:11:03.836634    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:11:03.866400    3672 logs.go:282] 0 containers: []
	W1212 21:11:03.866400    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:11:03.869335    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:11:03.901407    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:11:03.905108    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:11:03.936583    3672 logs.go:282] 0 containers: []
	W1212 21:11:03.936583    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:11:03.939584    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:11:03.973194    3672 logs.go:282] 0 containers: []
	W1212 21:11:03.973194    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:11:03.973194    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:11:03.973194    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:11:04.017860    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:11:04.017924    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:11:04.062125    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:11:04.062125    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:11:04.091110    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:11:04.091110    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:11:04.154087    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:11:04.154614    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:11:04.194460    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:11:04.194460    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:11:04.282918    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:11:04.282918    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:11:04.282918    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:11:04.328945    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:11:04.328945    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:11:04.367729    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:11:04.367729    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:11:06.926336    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:06.947051    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:11:06.980938    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:11:06.984794    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:11:07.016500    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:11:07.020223    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:11:07.050951    3672 logs.go:282] 0 containers: []
	W1212 21:11:07.050976    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:11:07.054635    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:11:07.086493    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:11:07.090250    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:11:07.119603    3672 logs.go:282] 0 containers: []
	W1212 21:11:07.119603    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:11:07.123618    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:11:07.156250    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:11:07.160157    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:11:07.191926    3672 logs.go:282] 0 containers: []
	W1212 21:11:07.192000    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:11:07.196259    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:11:07.225799    3672 logs.go:282] 0 containers: []
	W1212 21:11:07.225799    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:11:07.225799    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:11:07.225799    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:11:07.271813    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:11:07.271813    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:11:07.306121    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:11:07.306121    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:11:07.357711    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:11:07.358234    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:11:07.396387    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:11:07.396387    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:11:07.479334    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:11:07.479334    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:11:07.479334    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:11:07.523643    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:11:07.523643    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:11:07.554197    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:11:07.554223    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:11:07.619180    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:11:07.619180    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:11:10.171635    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:10.196630    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:11:10.268562    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:11:10.272560    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:11:10.310582    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:11:10.313558    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:11:10.349571    3672 logs.go:282] 0 containers: []
	W1212 21:11:10.349571    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:11:10.352565    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:11:10.382569    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:11:10.387565    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:11:10.421580    3672 logs.go:282] 0 containers: []
	W1212 21:11:10.421580    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:11:10.425560    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:11:10.486572    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:11:10.492583    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:11:10.533570    3672 logs.go:282] 0 containers: []
	W1212 21:11:10.533570    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:11:10.537572    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:11:10.567572    3672 logs.go:282] 0 containers: []
	W1212 21:11:10.568571    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:11:10.568571    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:11:10.568571    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:11:10.634566    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:11:10.634566    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:11:10.714583    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:11:10.714583    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:11:10.714583    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:11:10.757562    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:11:10.757562    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:11:10.794850    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:11:10.794850    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:11:10.828326    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:11:10.828326    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:11:10.864393    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:11:10.864393    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:11:10.912391    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:11:10.912391    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:11:10.956233    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:11:10.956233    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:11:13.519853    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:13.539856    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:11:13.574380    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:11:13.578617    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:11:13.613203    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:11:13.617073    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:11:13.650484    3672 logs.go:282] 0 containers: []
	W1212 21:11:13.650563    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:11:13.657036    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:11:13.686893    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:11:13.690842    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:11:13.720939    3672 logs.go:282] 0 containers: []
	W1212 21:11:13.720984    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:11:13.725916    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:11:13.758586    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:11:13.763052    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:11:13.795217    3672 logs.go:282] 0 containers: []
	W1212 21:11:13.795217    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:11:13.800664    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:11:13.834067    3672 logs.go:282] 0 containers: []
	W1212 21:11:13.834067    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:11:13.834067    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:11:13.834067    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:11:13.912999    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:11:13.912999    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:11:13.952832    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:11:13.952832    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:11:14.046009    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:11:14.046009    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:11:14.046009    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:11:14.096397    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:11:14.096397    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:11:14.138970    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:11:14.138970    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:11:14.174970    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:11:14.174970    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:11:14.244226    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:11:14.244226    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:11:14.291215    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:11:14.291215    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:11:16.847012    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:16.870510    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:11:16.909505    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:11:16.913293    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:11:16.943947    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:11:16.948403    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:11:16.980286    3672 logs.go:282] 0 containers: []
	W1212 21:11:16.980286    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:11:16.983282    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:11:17.015291    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:11:17.019288    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:11:17.054274    3672 logs.go:282] 0 containers: []
	W1212 21:11:17.054274    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:11:17.057272    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:11:17.088085    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:11:17.092084    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:11:17.123086    3672 logs.go:282] 0 containers: []
	W1212 21:11:17.123086    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:11:17.126083    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:11:17.161134    3672 logs.go:282] 0 containers: []
	W1212 21:11:17.161134    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:11:17.161134    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:11:17.161134    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:11:17.200143    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:11:17.200143    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:11:17.268876    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:11:17.268876    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:11:17.297852    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:11:17.297852    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:11:17.357469    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:11:17.357469    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:11:17.423726    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:11:17.423726    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:11:17.502211    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:11:17.502732    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:11:17.502732    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:11:17.548858    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:11:17.548858    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:11:17.593989    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:11:17.593989    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:11:20.142612    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:20.168923    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:11:20.204487    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:11:20.208501    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:11:20.255120    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:11:20.258108    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:11:20.286110    3672 logs.go:282] 0 containers: []
	W1212 21:11:20.286110    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:11:20.289119    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:11:20.316113    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:11:20.319109    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:11:20.348702    3672 logs.go:282] 0 containers: []
	W1212 21:11:20.348702    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:11:20.353275    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:11:20.382108    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:11:20.386099    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:11:20.414276    3672 logs.go:282] 0 containers: []
	W1212 21:11:20.414276    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:11:20.418283    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:11:20.448867    3672 logs.go:282] 0 containers: []
	W1212 21:11:20.448867    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:11:20.448867    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:11:20.448867    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:11:20.513889    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:11:20.514856    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:11:20.556753    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:11:20.556753    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:11:20.643345    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:11:20.643345    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:11:20.643345    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:11:20.694842    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:11:20.694842    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:11:20.745821    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:11:20.745821    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:11:20.791748    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:11:20.791793    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:11:20.820689    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:11:20.820689    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:11:20.870546    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:11:20.870582    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:11:23.408512    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:23.430737    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:11:23.468383    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:11:23.472880    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:11:23.508216    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:11:23.511202    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:11:23.541057    3672 logs.go:282] 0 containers: []
	W1212 21:11:23.541114    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:11:23.545562    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:11:23.575066    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:11:23.578842    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:11:23.611716    3672 logs.go:282] 0 containers: []
	W1212 21:11:23.611716    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:11:23.615041    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:11:23.650615    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:11:23.654853    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:11:23.693986    3672 logs.go:282] 0 containers: []
	W1212 21:11:23.693986    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:11:23.697429    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:11:23.732760    3672 logs.go:282] 0 containers: []
	W1212 21:11:23.732760    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:11:23.732760    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:11:23.732760    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:11:23.796336    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:11:23.796336    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:11:23.843841    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:11:23.843841    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:11:23.883805    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:11:23.883805    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:11:23.973751    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:11:23.973751    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:11:23.973751    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:11:24.023501    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:11:24.023501    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:11:24.070002    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:11:24.070002    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:11:24.112206    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:11:24.112206    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:11:24.143554    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:11:24.143554    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:11:26.696389    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:26.715385    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:11:26.746882    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:11:26.750677    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:11:26.778782    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:11:26.781785    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:11:26.813882    3672 logs.go:282] 0 containers: []
	W1212 21:11:26.813882    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:11:26.817689    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:11:26.848673    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:11:26.854720    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:11:26.886996    3672 logs.go:282] 0 containers: []
	W1212 21:11:26.886996    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:11:26.891002    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:11:26.926562    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:11:26.929574    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:11:26.966566    3672 logs.go:282] 0 containers: []
	W1212 21:11:26.966566    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:11:26.969565    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:11:26.997634    3672 logs.go:282] 0 containers: []
	W1212 21:11:26.997634    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:11:26.997634    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:11:26.997634    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:11:27.068323    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:11:27.068323    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:11:27.111317    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:11:27.111317    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:11:27.153568    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:11:27.153568    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:11:27.194388    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:11:27.194388    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:11:27.230387    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:11:27.230387    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:11:27.312493    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:11:27.312493    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:11:27.312493    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:11:27.357257    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:11:27.357257    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:11:27.403314    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:11:27.403366    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:11:29.956228    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:29.979530    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:11:30.014987    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:11:30.017948    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:11:30.048161    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:11:30.051702    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:11:30.080136    3672 logs.go:282] 0 containers: []
	W1212 21:11:30.080136    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:11:30.083135    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:11:30.111717    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:11:30.114710    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:11:30.141988    3672 logs.go:282] 0 containers: []
	W1212 21:11:30.141988    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:11:30.146464    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:11:30.177353    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:11:30.181323    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:11:30.208984    3672 logs.go:282] 0 containers: []
	W1212 21:11:30.208984    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:11:30.215364    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:11:30.248302    3672 logs.go:282] 0 containers: []
	W1212 21:11:30.248302    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:11:30.248302    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:11:30.248302    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:11:30.288280    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:11:30.288280    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:11:30.349412    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:11:30.349412    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:11:30.389740    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:11:30.389740    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:11:30.480446    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:11:30.480446    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:11:30.480446    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:11:30.525920    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:11:30.525920    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:11:30.569323    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:11:30.569323    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:11:30.600331    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:11:30.600331    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:11:30.650120    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:11:30.650192    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:11:33.202358    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:33.221359    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:11:33.258372    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:11:33.263379    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:11:33.297047    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:11:33.301693    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:11:33.330262    3672 logs.go:282] 0 containers: []
	W1212 21:11:33.330262    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:11:33.334137    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:11:33.376370    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:11:33.381357    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:11:33.411115    3672 logs.go:282] 0 containers: []
	W1212 21:11:33.411115    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:11:33.414793    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:11:33.447722    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:11:33.452729    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:11:33.486898    3672 logs.go:282] 0 containers: []
	W1212 21:11:33.486898    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:11:33.490897    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:11:33.522900    3672 logs.go:282] 0 containers: []
	W1212 21:11:33.522900    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:11:33.522900    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:11:33.522900    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:11:33.566702    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:11:33.567710    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:11:33.647710    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:11:33.647710    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:11:33.647710    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:11:33.705592    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:11:33.705592    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:11:33.785713    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:11:33.785713    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:11:33.827726    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:11:33.827726    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:11:33.867799    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:11:33.867799    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:11:33.926805    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:11:33.926805    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:11:34.003230    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:11:34.003230    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:11:36.542095    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:36.568388    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:11:36.610860    3672 logs.go:282] 1 containers: [a5f03d2822c4]
	I1212 21:11:36.614749    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:11:36.657729    3672 logs.go:282] 1 containers: [8e2fd5a5e17f]
	I1212 21:11:36.663201    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:11:36.718606    3672 logs.go:282] 0 containers: []
	W1212 21:11:36.719145    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:11:36.723103    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:11:36.770093    3672 logs.go:282] 1 containers: [6193b44cbfbe]
	I1212 21:11:36.773101    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:11:36.803095    3672 logs.go:282] 0 containers: []
	W1212 21:11:36.803095    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:11:36.808516    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:11:36.854874    3672 logs.go:282] 1 containers: [5952b5c79d6d]
	I1212 21:11:36.859339    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:11:36.895337    3672 logs.go:282] 0 containers: []
	W1212 21:11:36.895337    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:11:36.899339    3672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1212 21:11:36.931113    3672 logs.go:282] 0 containers: []
	W1212 21:11:36.931113    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:11:36.931113    3672 logs.go:123] Gathering logs for kube-scheduler [6193b44cbfbe] ...
	I1212 21:11:36.931113    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6193b44cbfbe"
	I1212 21:11:36.983673    3672 logs.go:123] Gathering logs for kube-controller-manager [5952b5c79d6d] ...
	I1212 21:11:36.983673    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5952b5c79d6d"
	I1212 21:11:37.031825    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:11:37.031825    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:11:37.080466    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:11:37.080466    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:11:37.160506    3672 logs.go:123] Gathering logs for etcd [8e2fd5a5e17f] ...
	I1212 21:11:37.160506    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2fd5a5e17f"
	I1212 21:11:37.205499    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:11:37.205499    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:11:37.235500    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:11:37.235500    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:11:37.279480    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:11:37.279480    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:11:37.370187    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:11:37.370292    3672 logs.go:123] Gathering logs for kube-apiserver [a5f03d2822c4] ...
	I1212 21:11:37.370292    3672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f03d2822c4"
	I1212 21:11:39.931602    3672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:39.959311    3672 kubeadm.go:602] duration metric: took 4m4.5553023s to restartPrimaryControlPlane
	W1212 21:11:39.959311    3672 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 21:11:39.965070    3672 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1212 21:11:40.740339    3672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:11:40.773315    3672 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:11:40.786323    3672 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 21:11:40.791311    3672 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:11:40.804309    3672 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:11:40.804309    3672 kubeadm.go:158] found existing configuration files:
	
	I1212 21:11:40.808327    3672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 21:11:40.822078    3672 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 21:11:40.827365    3672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 21:11:40.856488    3672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 21:11:40.873959    3672 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 21:11:40.877967    3672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 21:11:40.893968    3672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 21:11:40.905964    3672 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 21:11:40.909961    3672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 21:11:40.924959    3672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 21:11:40.937960    3672 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 21:11:40.952163    3672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 21:11:40.992880    3672 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 21:11:41.144668    3672 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1212 21:11:41.239411    3672 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 21:11:41.356420    3672 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 21:15:42.544932    3672 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 21:15:42.544932    3672 kubeadm.go:319] 
	I1212 21:15:42.544932    3672 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 21:15:42.547935    3672 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 21:15:42.547935    3672 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 21:15:42.548948    3672 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 21:15:42.548948    3672 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 21:15:42.548948    3672 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 21:15:42.548948    3672 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 21:15:42.548948    3672 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 21:15:42.548948    3672 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 21:15:42.548948    3672 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 21:15:42.549960    3672 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 21:15:42.549960    3672 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 21:15:42.549960    3672 kubeadm.go:319] CONFIG_INET: enabled
	I1212 21:15:42.549960    3672 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 21:15:42.549960    3672 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 21:15:42.550943    3672 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 21:15:42.550943    3672 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 21:15:42.550943    3672 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 21:15:42.550943    3672 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 21:15:42.550943    3672 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 21:15:42.550943    3672 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 21:15:42.551940    3672 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 21:15:42.551940    3672 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 21:15:42.551940    3672 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 21:15:42.551940    3672 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 21:15:42.551940    3672 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 21:15:42.551940    3672 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 21:15:42.552951    3672 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 21:15:42.552951    3672 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 21:15:42.552951    3672 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 21:15:42.552951    3672 kubeadm.go:319] OS: Linux
	I1212 21:15:42.552951    3672 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 21:15:42.552951    3672 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 21:15:42.552951    3672 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 21:15:42.552951    3672 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 21:15:42.553935    3672 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 21:15:42.553935    3672 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 21:15:42.553935    3672 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 21:15:42.553935    3672 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 21:15:42.553935    3672 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 21:15:42.553935    3672 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:15:42.554949    3672 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:15:42.554949    3672 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 21:15:42.554949    3672 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:15:42.556940    3672 out.go:252]   - Generating certificates and keys ...
	I1212 21:15:42.557940    3672 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 21:15:42.557940    3672 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 21:15:42.557940    3672 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 21:15:42.557940    3672 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 21:15:42.557940    3672 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 21:15:42.557940    3672 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 21:15:42.558935    3672 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 21:15:42.558935    3672 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 21:15:42.558935    3672 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 21:15:42.558935    3672 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 21:15:42.559942    3672 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 21:15:42.559942    3672 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:15:42.559942    3672 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:15:42.559942    3672 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 21:15:42.559942    3672 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:15:42.559942    3672 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:15:42.560936    3672 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:15:42.560936    3672 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:15:42.560936    3672 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:15:42.563944    3672 out.go:252]   - Booting up control plane ...
	I1212 21:15:42.563944    3672 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:15:42.563944    3672 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:15:42.564935    3672 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:15:42.564935    3672 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:15:42.564935    3672 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 21:15:42.565949    3672 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 21:15:42.565949    3672 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:15:42.565949    3672 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 21:15:42.566952    3672 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 21:15:42.566952    3672 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 21:15:42.566952    3672 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000836204s
	I1212 21:15:42.566952    3672 kubeadm.go:319] 
	I1212 21:15:42.566952    3672 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 21:15:42.566952    3672 kubeadm.go:319] 	- The kubelet is not running
	I1212 21:15:42.566952    3672 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 21:15:42.567934    3672 kubeadm.go:319] 
	I1212 21:15:42.567934    3672 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 21:15:42.567934    3672 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 21:15:42.567934    3672 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 21:15:42.567934    3672 kubeadm.go:319] 
	W1212 21:15:42.567934    3672 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000836204s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000836204s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 21:15:42.574939    3672 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1212 21:15:43.065588    3672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:15:43.090565    3672 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 21:15:43.095570    3672 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:15:43.111571    3672 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:15:43.111571    3672 kubeadm.go:158] found existing configuration files:
	
	I1212 21:15:43.118578    3672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 21:15:43.136584    3672 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 21:15:43.144581    3672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 21:15:43.172582    3672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 21:15:43.189570    3672 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 21:15:43.193575    3672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 21:15:43.212117    3672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 21:15:43.228734    3672 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 21:15:43.234734    3672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 21:15:43.260730    3672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 21:15:43.279743    3672 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 21:15:43.286736    3672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 21:15:43.307745    3672 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 21:15:43.462727    3672 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1212 21:15:43.553670    3672 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 21:15:43.671368    3672 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 21:19:44.615772    3672 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 21:19:44.615772    3672 kubeadm.go:319] 
	I1212 21:19:44.615772    3672 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 21:19:44.620011    3672 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 21:19:44.620011    3672 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 21:19:44.620011    3672 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 21:19:44.620011    3672 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 21:19:44.620547    3672 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 21:19:44.620666    3672 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 21:19:44.620666    3672 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 21:19:44.620666    3672 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 21:19:44.620666    3672 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 21:19:44.620666    3672 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 21:19:44.621192    3672 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 21:19:44.621277    3672 kubeadm.go:319] CONFIG_INET: enabled
	I1212 21:19:44.621374    3672 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 21:19:44.621399    3672 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 21:19:44.621399    3672 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 21:19:44.621399    3672 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 21:19:44.621399    3672 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 21:19:44.621399    3672 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 21:19:44.621929    3672 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 21:19:44.622170    3672 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 21:19:44.622200    3672 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 21:19:44.622200    3672 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 21:19:44.622200    3672 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 21:19:44.622200    3672 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 21:19:44.622200    3672 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 21:19:44.622818    3672 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 21:19:44.622818    3672 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 21:19:44.622818    3672 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 21:19:44.622818    3672 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 21:19:44.622818    3672 kubeadm.go:319] OS: Linux
	I1212 21:19:44.623341    3672 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 21:19:44.623484    3672 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 21:19:44.623515    3672 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 21:19:44.623621    3672 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 21:19:44.623681    3672 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 21:19:44.623681    3672 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 21:19:44.623681    3672 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 21:19:44.623681    3672 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 21:19:44.623681    3672 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 21:19:44.624284    3672 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:19:44.624419    3672 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:19:44.624419    3672 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 21:19:44.624419    3672 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:19:44.701398    3672 out.go:252]   - Generating certificates and keys ...
	I1212 21:19:44.701851    3672 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 21:19:44.701851    3672 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 21:19:44.701851    3672 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 21:19:44.701851    3672 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 21:19:44.701851    3672 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 21:19:44.702496    3672 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 21:19:44.702496    3672 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 21:19:44.702496    3672 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 21:19:44.702496    3672 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 21:19:44.703029    3672 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 21:19:44.703108    3672 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 21:19:44.703163    3672 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:19:44.703163    3672 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:19:44.703163    3672 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 21:19:44.703163    3672 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:19:44.703163    3672 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:19:44.703731    3672 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:19:44.703821    3672 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:19:44.703821    3672 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:19:44.751684    3672 out.go:252]   - Booting up control plane ...
	I1212 21:19:44.751786    3672 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:19:44.751786    3672 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:19:44.752322    3672 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:19:44.752440    3672 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:19:44.752440    3672 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 21:19:44.753089    3672 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 21:19:44.753429    3672 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:19:44.753634    3672 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 21:19:44.754092    3672 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 21:19:44.754359    3672 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 21:19:44.754565    3672 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00021833s
	I1212 21:19:44.754612    3672 kubeadm.go:319] 
	I1212 21:19:44.754747    3672 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 21:19:44.754922    3672 kubeadm.go:319] 	- The kubelet is not running
	I1212 21:19:44.755082    3672 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 21:19:44.755082    3672 kubeadm.go:319] 
	I1212 21:19:44.755289    3672 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 21:19:44.755289    3672 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 21:19:44.755289    3672 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 21:19:44.755289    3672 kubeadm.go:319] 
	I1212 21:19:44.755289    3672 kubeadm.go:403] duration metric: took 12m9.4078185s to StartCluster
	I1212 21:19:44.755289    3672 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:19:44.760820    3672 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:19:44.824660    3672 cri.go:89] found id: ""
	I1212 21:19:44.824660    3672 logs.go:282] 0 containers: []
	W1212 21:19:44.824660    3672 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:19:44.824660    3672 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:19:44.829837    3672 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:19:44.878307    3672 cri.go:89] found id: ""
	I1212 21:19:44.878307    3672 logs.go:282] 0 containers: []
	W1212 21:19:44.878307    3672 logs.go:284] No container was found matching "etcd"
	I1212 21:19:44.878307    3672 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:19:44.883779    3672 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:19:44.950014    3672 cri.go:89] found id: ""
	I1212 21:19:44.950014    3672 logs.go:282] 0 containers: []
	W1212 21:19:44.950014    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:19:44.950014    3672 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:19:44.955259    3672 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:19:45.002810    3672 cri.go:89] found id: ""
	I1212 21:19:45.002810    3672 logs.go:282] 0 containers: []
	W1212 21:19:45.002810    3672 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:19:45.002810    3672 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:19:45.009047    3672 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:19:45.053837    3672 cri.go:89] found id: ""
	I1212 21:19:45.053880    3672 logs.go:282] 0 containers: []
	W1212 21:19:45.053880    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:19:45.053880    3672 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:19:45.058452    3672 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:19:45.107565    3672 cri.go:89] found id: ""
	I1212 21:19:45.107565    3672 logs.go:282] 0 containers: []
	W1212 21:19:45.107565    3672 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:19:45.107565    3672 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:19:45.114689    3672 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:19:45.160435    3672 cri.go:89] found id: ""
	I1212 21:19:45.160435    3672 logs.go:282] 0 containers: []
	W1212 21:19:45.160435    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:19:45.160435    3672 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:19:45.164686    3672 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:19:45.207166    3672 cri.go:89] found id: ""
	I1212 21:19:45.207166    3672 logs.go:282] 0 containers: []
	W1212 21:19:45.207166    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:19:45.207166    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:19:45.208179    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:19:45.284363    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:19:45.284439    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:19:45.329298    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:19:45.329298    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:19:45.425787    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:19:45.426341    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:19:45.426389    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:19:45.457222    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:19:45.457222    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:19:45.509039    3672 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00021833s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 21:19:45.509579    3672 out.go:285] * 
	* 
	W1212 21:19:45.509702    3672 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00021833s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00021833s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 21:19:45.509989    3672 out.go:285] * 
	W1212 21:19:45.513435    3672 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 21:19:45.565706    3672 out.go:203] 
	W1212 21:19:45.605838    3672 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00021833s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 21:19:45.605838    3672 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 21:19:45.605838    3672 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 21:19:45.618910    3672 out.go:203] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-716700 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker : exit status 109
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-716700 version --output=json
E1212 21:19:54.833326   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-716700 version --output=json: exit status 1 (10.1760718s)

                                                
                                                
-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "34",
	    "gitVersion": "v1.34.3",
	    "gitCommit": "df11db1c0f08fab3c0baee1e5ce6efbf816af7f1",
	    "gitTreeState": "clean",
	    "buildDate": "2025-12-09T15:06:39Z",
	    "goVersion": "go1.24.11",
	    "compiler": "gc",
	    "platform": "windows/amd64"
	  },
	  "kustomizeVersion": "v5.7.1"
	}

                                                
                                                
-- /stdout --
** stderr ** 
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-12 21:19:57.1601458 +0000 UTC m=+6662.803747201
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect kubernetes-upgrade-716700
helpers_test.go:244: (dbg) docker inspect kubernetes-upgrade-716700:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3949df5bdcd34056c6b66cee6db19661cf9f05b41103365f9d8b6e13be6ec682",
	        "Created": "2025-12-12T21:06:21.818924756Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 268778,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T21:07:07.617268224Z",
	            "FinishedAt": "2025-12-12T21:07:04.899258177Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/3949df5bdcd34056c6b66cee6db19661cf9f05b41103365f9d8b6e13be6ec682/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3949df5bdcd34056c6b66cee6db19661cf9f05b41103365f9d8b6e13be6ec682/hostname",
	        "HostsPath": "/var/lib/docker/containers/3949df5bdcd34056c6b66cee6db19661cf9f05b41103365f9d8b6e13be6ec682/hosts",
	        "LogPath": "/var/lib/docker/containers/3949df5bdcd34056c6b66cee6db19661cf9f05b41103365f9d8b6e13be6ec682/3949df5bdcd34056c6b66cee6db19661cf9f05b41103365f9d8b6e13be6ec682-json.log",
	        "Name": "/kubernetes-upgrade-716700",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-716700:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-716700",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/16d9038364b92fe7bdd66b409ae027b2277cae2a0e13befd7c87bf12a410e7ff-init/diff:/var/lib/docker/overlay2/98a48e148b465d6f3cfef773b7defe35ccd4e6e0eb3c49d8b329f27b5b52fa09/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16d9038364b92fe7bdd66b409ae027b2277cae2a0e13befd7c87bf12a410e7ff/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16d9038364b92fe7bdd66b409ae027b2277cae2a0e13befd7c87bf12a410e7ff/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16d9038364b92fe7bdd66b409ae027b2277cae2a0e13befd7c87bf12a410e7ff/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-716700",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-716700/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-716700",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-716700",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-716700",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ccc73f30cd47b7eb1b98f896b0b9f245eefbc6e56792bb183e2fe417de1a970b",
	            "SandboxKey": "/var/run/docker/netns/ccc73f30cd47",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60365"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60366"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60367"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60368"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60369"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-716700": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ce56d0e87fbd6ba617d21414225132606253b2dcfb5a201e88392df886053dc8",
	                    "EndpointID": "b04f2d8e34a90e7cd528c31f5b97cfc969894817219bda849727756602081706",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-716700",
	                        "3949df5bdcd3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-716700 -n kubernetes-upgrade-716700
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-716700 -n kubernetes-upgrade-716700: exit status 2 (612.3407ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-716700 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p kubernetes-upgrade-716700 logs -n 25: (2.8409528s)
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                        ARGS                                                                                                         │        PROFILE         │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p calico-864500 sudo systemctl cat kubelet --no-pager                                                                                                                                                              │ calico-864500          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:18 UTC │ 12 Dec 25 21:18 UTC │
	│ ssh     │ -p calico-864500 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                                                               │ calico-864500          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:18 UTC │ 12 Dec 25 21:18 UTC │
	│ ssh     │ -p calico-864500 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                                                              │ calico-864500          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:18 UTC │ 12 Dec 25 21:18 UTC │
	│ ssh     │ -p calico-864500 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                              │ calico-864500          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:18 UTC │ 12 Dec 25 21:18 UTC │
	│ ssh     │ -p calico-864500 sudo systemctl status docker --all --full --no-pager                                                                                                                                               │ calico-864500          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:18 UTC │ 12 Dec 25 21:18 UTC │
	│ ssh     │ -p calico-864500 sudo systemctl cat docker --no-pager                                                                                                                                                               │ calico-864500          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:18 UTC │ 12 Dec 25 21:18 UTC │
	│ start   │ -p old-k8s-version-246400 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0 │ old-k8s-version-246400 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:18 UTC │                     │
	│ ssh     │ -p calico-864500 sudo cat /etc/docker/daemon.json                                                                                                                                                                   │ calico-864500          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:18 UTC │ 12 Dec 25 21:18 UTC │
	│ ssh     │ -p calico-864500 sudo docker system info                                                                                                                                                                            │ calico-864500          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:18 UTC │ 12 Dec 25 21:18 UTC │
	│ ssh     │ -p calico-864500 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                           │ calico-864500          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:18 UTC │ 12 Dec 25 21:18 UTC │
	│ ssh     │ -p calico-864500 sudo systemctl cat cri-docker --no-pager                                                                                                                                                           │ calico-864500          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:18 UTC │ 12 Dec 25 21:18 UTC │
	│ ssh     │ -p calico-864500 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                      │ calico-864500          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:18 UTC │ 12 Dec 25 21:18 UTC │
	│ ssh     │ -p calico-864500 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                │ calico-864500          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:18 UTC │ 12 Dec 25 21:18 UTC │
	│ ssh     │ -p calico-864500 sudo cri-dockerd --version                                                                                                                                                                         │ calico-864500          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:18 UTC │ 12 Dec 25 21:18 UTC │
	│ ssh     │ -p calico-864500 sudo systemctl status containerd --all --full --no-pager                                                                                                                                           │ calico-864500          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:18 UTC │ 12 Dec 25 21:18 UTC │
	│ ssh     │ -p calico-864500 sudo systemctl cat containerd --no-pager                                                                                                                                                           │ calico-864500          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:18 UTC │ 12 Dec 25 21:18 UTC │
	│ ssh     │ -p calico-864500 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                    │ calico-864500          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:18 UTC │ 12 Dec 25 21:18 UTC │
	│ ssh     │ -p calico-864500 sudo cat /etc/containerd/config.toml                                                                                                                                                               │ calico-864500          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:18 UTC │ 12 Dec 25 21:18 UTC │
	│ ssh     │ -p calico-864500 sudo containerd config dump                                                                                                                                                                        │ calico-864500          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:18 UTC │ 12 Dec 25 21:18 UTC │
	│ ssh     │ -p calico-864500 sudo systemctl status crio --all --full --no-pager                                                                                                                                                 │ calico-864500          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:18 UTC │                     │
	│ ssh     │ -p calico-864500 sudo systemctl cat crio --no-pager                                                                                                                                                                 │ calico-864500          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:18 UTC │ 12 Dec 25 21:18 UTC │
	│ ssh     │ -p calico-864500 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                       │ calico-864500          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:18 UTC │ 12 Dec 25 21:18 UTC │
	│ ssh     │ -p calico-864500 sudo crio config                                                                                                                                                                                   │ calico-864500          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:18 UTC │ 12 Dec 25 21:18 UTC │
	│ delete  │ -p calico-864500                                                                                                                                                                                                    │ calico-864500          │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:18 UTC │ 12 Dec 25 21:19 UTC │
	│ start   │ -p no-preload-285600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-285600      │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:19 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 21:19:11
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 21:19:11.785260   11500 out.go:360] Setting OutFile to fd 1476 ...
	I1212 21:19:11.829252   11500 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:19:11.829252   11500 out.go:374] Setting ErrFile to fd 1332...
	I1212 21:19:11.829252   11500 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:19:11.844255   11500 out.go:368] Setting JSON to false
	I1212 21:19:11.847260   11500 start.go:133] hostinfo: {"hostname":"minikube4","uptime":8489,"bootTime":1765565862,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 21:19:11.847260   11500 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 21:19:11.852259   11500 out.go:179] * [no-preload-285600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 21:19:11.857363   11500 notify.go:221] Checking for updates...
	I1212 21:19:11.859315   11500 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:19:11.861298   11500 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:19:11.865443   11500 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 21:19:11.868158   11500 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 21:19:11.875325   11500 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:19:11.878314   11500 config.go:182] Loaded profile config "false-864500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1212 21:19:11.879318   11500 config.go:182] Loaded profile config "kubernetes-upgrade-716700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:19:11.879318   11500 config.go:182] Loaded profile config "old-k8s-version-246400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I1212 21:19:11.879318   11500 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 21:19:12.008386   11500 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 21:19:12.011393   11500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:19:12.299490   11500 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-12 21:19:12.280377687 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:19:12.304502   11500 out.go:179] * Using the docker driver based on user configuration
	I1212 21:19:12.307505   11500 start.go:309] selected driver: docker
	I1212 21:19:12.308491   11500 start.go:927] validating driver "docker" against <nil>
	I1212 21:19:12.308491   11500 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:19:12.359790   11500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:19:12.647149   11500 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-12 21:19:12.620326662 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:19:12.647149   11500 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 21:19:12.648155   11500 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:19:12.650152   11500 out.go:179] * Using Docker Desktop driver with root privileges
	I1212 21:19:12.654146   11500 cni.go:84] Creating CNI manager for ""
	I1212 21:19:12.654146   11500 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:19:12.654146   11500 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 21:19:12.654146   11500 start.go:353] cluster config:
	{Name:no-preload-285600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-285600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:19:12.658154   11500 out.go:179] * Starting "no-preload-285600" primary control-plane node in "no-preload-285600" cluster
	I1212 21:19:12.661149   11500 cache.go:134] Beginning downloading kic base image for docker with docker
	I1212 21:19:12.665145   11500 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:19:12.668154   11500 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:19:12.668154   11500 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 21:19:12.669156   11500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1212 21:19:12.669156   11500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1212 21:19:12.669156   11500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1212 21:19:12.669156   11500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1212 21:19:12.669156   11500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1212 21:19:12.669156   11500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1212 21:19:12.669156   11500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1212 21:19:12.669156   11500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1212 21:19:12.669156   11500 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\config.json ...
	I1212 21:19:12.669156   11500 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\config.json: {Name:mka3f24491318cc00f75a0705eb5398b2088bad7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:19:13.078698   11500 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:19:13.078698   11500 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:19:13.078698   11500 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:19:13.078698   11500 start.go:360] acquireMachinesLock for no-preload-285600: {Name:mk2731f875a3a62f76017c58cc7d43a1bb1f8ba5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:19:13.078698   11500 start.go:364] duration metric: took 0s to acquireMachinesLock for "no-preload-285600"
	I1212 21:19:13.078698   11500 start.go:93] Provisioning new machine with config: &{Name:no-preload-285600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-285600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 21:19:13.078698   11500 start.go:125] createHost starting for "" (driver="docker")
	I1212 21:19:09.364288   11652 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 21:19:09.369074   11652 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 21:19:09.414148   11652 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:19:09.436928   11652 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 21:19:09.460562   11652 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:19:09.485774   11652 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:19:09.509096   11652 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 21:19:09.528091   11652 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 21:19:09.547118   11652 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 21:19:09.581103   11652 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:19:09.612105   11652 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:19:09.632099   11652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:19:09.785728   11652 ssh_runner.go:195] Run: sudo systemctl restart containerd
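The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place to force the `cgroupfs` driver and the standard CNI conf dir. A minimal Python sketch of the two key substitutions, applied to a hypothetical config fragment (not minikube's actual code, which shells out to `sed` as logged):

```python
import re

# Hypothetical fragment of /etc/containerd/config.toml before the edits.
config = """\
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
[plugins."io.containerd.grpc.v1.cri".cni]
  conf_dir = "/opt/cni/net.d"
"""

# Mirror of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
config = re.sub(r"(?m)^( *)SystemdCgroup = .*$",
                r"\1SystemdCgroup = false", config)

# Mirror of: sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g'
config = re.sub(r"(?m)^( *)conf_dir = .*$",
                r'\1conf_dir = "/etc/cni/net.d"', config)

print(config)
```

Both substitutions preserve the original leading indentation via the captured group, which is why the logged `sed` expressions use `\1` rather than a fixed indent.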
	I1212 21:19:10.009135   11652 start.go:496] detecting cgroup driver to use...
	I1212 21:19:10.009135   11652 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:19:10.016145   11652 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 21:19:10.047139   11652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:19:10.077133   11652 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:19:10.145152   11652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:19:10.169309   11652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 21:19:10.187296   11652 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:19:10.213306   11652 ssh_runner.go:195] Run: which cri-dockerd
	I1212 21:19:10.225311   11652 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 21:19:10.240303   11652 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1212 21:19:10.263297   11652 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 21:19:10.403864   11652 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 21:19:10.544064   11652 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 21:19:10.544695   11652 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 21:19:10.570920   11652 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1212 21:19:10.590928   11652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:19:10.740538   11652 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 21:19:12.256491   11652 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5159291s)
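The `scp memory --> /etc/docker/daemon.json (130 bytes)` step above writes Docker's daemon config to match the detected `cgroupfs` driver before the restart. The log does not show the file's contents; a plausible sketch of such a daemon.json, with the field names assumed (hedged, not taken from the log):

```python
import json

# Assumed shape of the ~130-byte daemon.json that switches Docker to the
# cgroupfs driver; the exact keys minikube writes are not in the log.
daemon_json = {
    "exec-opts": ["native.cgroupdriver=cgroupfs"],
    "log-driver": "json-file",
    "log-opts": {"max-size": "100m"},
    "storage-driver": "overlay2",
}
rendered = json.dumps(daemon_json, indent=2)
print(rendered)
```

A `systemctl daemon-reload` followed by `systemctl restart docker` (as in the log) is required for the new options to take effect.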
	I1212 21:19:12.263494   11652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:19:12.289486   11652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1212 21:19:12.317541   11652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:19:12.348790   11652 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 21:19:12.549553   11652 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 21:19:12.839341   11652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:19:13.259273   11652 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 21:19:13.402546   11652 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1212 21:19:13.538226   11652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:19:14.010581   11652 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1212 21:19:14.304653   11652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:19:14.377781   11652 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 21:19:14.385547   11652 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 21:19:14.420408   11652 start.go:564] Will wait 60s for crictl version
	I1212 21:19:14.430325   11652 ssh_runner.go:195] Run: which crictl
	I1212 21:19:14.865056   11652 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:19:14.990763   11652 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1212 21:19:14.996024   11652 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:19:15.099697   11652 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:19:11.551290    2276 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-246400 --name old-k8s-version-246400 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-246400 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-246400 --network old-k8s-version-246400 --ip 192.168.112.2 --volume old-k8s-version-246400:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138: (1.5451255s)
	I1212 21:19:11.555287    2276 cli_runner.go:164] Run: docker container inspect old-k8s-version-246400 --format={{.State.Running}}
	I1212 21:19:11.612291    2276 cli_runner.go:164] Run: docker container inspect old-k8s-version-246400 --format={{.State.Status}}
	I1212 21:19:11.666293    2276 cli_runner.go:164] Run: docker exec old-k8s-version-246400 stat /var/lib/dpkg/alternatives/iptables
	I1212 21:19:11.777251    2276 oci.go:144] the created container "old-k8s-version-246400" has a running status.
	I1212 21:19:11.777251    2276 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-246400\id_rsa...
	I1212 21:19:11.904396    2276 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-246400\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 21:19:11.983398    2276 cli_runner.go:164] Run: docker container inspect old-k8s-version-246400 --format={{.State.Status}}
	I1212 21:19:12.043392    2276 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 21:19:12.043392    2276 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-246400 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 21:19:12.229502    2276 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-246400\id_rsa...
	I1212 21:19:13.082699   11500 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 21:19:13.082699   11500 start.go:159] libmachine.API.Create for "no-preload-285600" (driver="docker")
	I1212 21:19:13.082699   11500 client.go:173] LocalClient.Create starting
	I1212 21:19:13.083687   11500 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1212 21:19:13.083687   11500 main.go:143] libmachine: Decoding PEM data...
	I1212 21:19:13.083687   11500 main.go:143] libmachine: Parsing certificate...
	I1212 21:19:13.083687   11500 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1212 21:19:13.083687   11500 main.go:143] libmachine: Decoding PEM data...
	I1212 21:19:13.083687   11500 main.go:143] libmachine: Parsing certificate...
	I1212 21:19:13.090686   11500 cli_runner.go:164] Run: docker network inspect no-preload-285600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 21:19:13.190881   11500 cli_runner.go:211] docker network inspect no-preload-285600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 21:19:13.196895   11500 network_create.go:284] running [docker network inspect no-preload-285600] to gather additional debugging logs...
	I1212 21:19:13.196895   11500 cli_runner.go:164] Run: docker network inspect no-preload-285600
	W1212 21:19:15.028733   11500 cli_runner.go:211] docker network inspect no-preload-285600 returned with exit code 1
	I1212 21:19:15.028733   11500 cli_runner.go:217] Completed: docker network inspect no-preload-285600: (1.8318087s)
	I1212 21:19:15.029266   11500 network_create.go:287] error running [docker network inspect no-preload-285600]: docker network inspect no-preload-285600: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-285600 not found
	I1212 21:19:15.029320   11500 network_create.go:289] output of [docker network inspect no-preload-285600]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-285600 not found
	
	** /stderr **
	I1212 21:19:15.035947   11500 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 21:19:15.193358   11500 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 21:19:15.224489   11500 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 21:19:15.271604   11500 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 21:19:15.302308   11500 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 21:19:15.349210   11500 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 21:19:15.401607   11500 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 21:19:15.440870   11500 network.go:209] skipping subnet 192.168.103.0/24 that is reserved: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 21:19:15.472180   11500 network.go:209] skipping subnet 192.168.112.0/24 that is reserved: &{IP:192.168.112.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.112.0/24 Gateway:192.168.112.1 ClientMin:192.168.112.2 ClientMax:192.168.112.254 Broadcast:192.168.112.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 21:19:15.516010   11500 network.go:206] using free private subnet 192.168.121.0/24: &{IP:192.168.121.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.121.0/24 Gateway:192.168.121.1 ClientMin:192.168.121.2 ClientMax:192.168.121.254 Broadcast:192.168.121.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b9cdb0}
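The subnet scan above walks candidate `192.168.x.0/24` networks, skipping each one already reserved by an existing docker network, and settles on `192.168.121.0/24`. A small sketch of that selection, assuming the fixed step of 9 in the third octet that the logged candidates (49, 58, 67, …, 121) imply:

```python
import ipaddress

def first_free_subnet(reserved, start=49, step=9):
    """Return the first 192.168.x.0/24 candidate not already reserved.
    The step of 9 matches the increments seen in the log."""
    octet = start
    while octet <= 255:
        subnet = ipaddress.ip_network(f"192.168.{octet}.0/24")
        if subnet not in reserved:
            return subnet
        octet += step
    raise RuntimeError("no free private /24 found")

# Subnets the log reports as reserved by existing docker networks.
reserved = {ipaddress.ip_network(f"192.168.{o}.0/24")
            for o in (49, 58, 67, 76, 85, 94, 103, 112)}

subnet = first_free_subnet(reserved)
gateway = subnet.network_address + 1    # gateway, as in the log
static_ip = subnet.network_address + 2  # first client address for the node
print(subnet, gateway, static_ip)
```

This reproduces the values the log then uses: `192.168.121.0/24` for the network, `.1` for the gateway, and `.2` as the container's calculated static IP.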
	I1212 21:19:15.516585   11500 network_create.go:124] attempt to create docker network no-preload-285600 192.168.121.0/24 with gateway 192.168.121.1 and MTU of 1500 ...
	I1212 21:19:15.521595   11500 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.121.0/24 --gateway=192.168.121.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-285600 no-preload-285600
	I1212 21:19:15.789772   11500 network_create.go:108] docker network no-preload-285600 192.168.121.0/24 created
	I1212 21:19:15.789772   11500 kic.go:121] calculated static IP "192.168.121.2" for the "no-preload-285600" container
	I1212 21:19:15.809008   11500 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 21:19:15.897795   11500 cli_runner.go:164] Run: docker volume create no-preload-285600 --label name.minikube.sigs.k8s.io=no-preload-285600 --label created_by.minikube.sigs.k8s.io=true
	I1212 21:19:15.990942   11500 oci.go:103] Successfully created a docker volume no-preload-285600
	I1212 21:19:15.996535   11500 cli_runner.go:164] Run: docker run --rm --name no-preload-285600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-285600 --entrypoint /usr/bin/test -v no-preload-285600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib
	I1212 21:19:16.240677   11500 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:19:16.240677   11500 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 21:19:16.248671   11500 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:19:16.249673   11500 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 21:19:16.254674   11500 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 21:19:16.260668   11500 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 21:19:16.264673   11500 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:19:16.264673   11500 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1212 21:19:16.283861   11500 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1212 21:19:16.313574   11500 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:19:16.313574   11500 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1212 21:19:16.314607   11500 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 3.6453931s
	I1212 21:19:16.314607   11500 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1212 21:19:16.330885   11500 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:19:16.331701   11500 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	W1212 21:19:16.339638   11500 image.go:191] authn lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1212 21:19:16.348041   11500 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 21:19:16.381931   11500 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:19:16.381988   11500 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1212 21:19:16.381988   11500 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 3.7127735s
	I1212 21:19:16.381988   11500 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1212 21:19:16.390465   11500 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:19:16.390857   11500 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 21:19:16.401875   11500 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	W1212 21:19:16.415849   11500 image.go:191] authn lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1212 21:19:16.449852   11500 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:19:16.450860   11500 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1212 21:19:16.450860   11500 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.7816444s
	I1212 21:19:16.450860   11500 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	W1212 21:19:16.481851   11500 image.go:191] authn lookup for registry.k8s.io/coredns/coredns:v1.13.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1212 21:19:16.548106   11500 image.go:191] authn lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1212 21:19:16.615682   11500 image.go:191] authn lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1212 21:19:16.761862   11500 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1212 21:19:16.775859   11500 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1212 21:19:16.797889   11500 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1212 21:19:16.824668   11500 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1212 21:19:15.178204   11652 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.1.2 ...
	I1212 21:19:15.185804   11652 cli_runner.go:164] Run: docker exec -t false-864500 dig +short host.docker.internal
	I1212 21:19:15.506614   11652 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1212 21:19:15.515080   11652 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1212 21:19:15.533595   11652 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:19:15.562586   11652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" false-864500
	I1212 21:19:15.696259   11652 kubeadm.go:884] updating cluster {Name:false-864500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:false-864500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 21:19:15.696259   11652 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1212 21:19:15.702307   11652 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:19:15.750569   11652 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 21:19:15.750569   11652 docker.go:621] Images already preloaded, skipping extraction
	I1212 21:19:15.761686   11652 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:19:15.814439   11652 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 21:19:15.814439   11652 cache_images.go:86] Images are preloaded, skipping loading
	I1212 21:19:15.815447   11652 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 docker true true} ...
	I1212 21:19:15.815447   11652 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=false-864500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:false-864500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false}
	I1212 21:19:15.821438   11652 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 21:19:15.949097   11652 cni.go:84] Creating CNI manager for "false"
	I1212 21:19:15.949097   11652 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 21:19:15.949097   11652 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:false-864500 NodeName:false-864500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:19:15.950545   11652 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "false-864500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
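	The dump above is a single multi-document YAML file: minikube concatenates an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration, and a KubeProxyConfiguration, separated by `---`, before shipping it to the node as kubeadm.yaml. A minimal sketch of that structure (file name is illustrative, contents trimmed to the headers), with a quick sanity check that all four documents are present:

```shell
#!/bin/sh
# Skeleton of the generated kubeadm config: four YAML documents separated
# by "---" (only the apiVersion/kind headers are kept here for brevity).
cat > kubeadm.demo.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
# Each document contributes exactly one "kind:" line.
grep -c '^kind:' kubeadm.demo.yaml   # prints 4
```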
	
	I1212 21:19:15.958521   11652 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 21:19:15.986433   11652 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:19:15.993529   11652 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:19:16.019752   11652 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1212 21:19:16.054908   11652 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:19:16.090331   11652 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1212 21:19:16.136311   11652 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1212 21:19:16.149787   11652 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
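	The hosts-file command logged above uses a filter-and-append pattern that is idempotent: strip any existing line for the control-plane name, append the fresh mapping, then copy a temp file over /etc/hosts. The same pattern can be sketched standalone (paths, the stale address, and the demo file name are illustrative):

```shell
#!/bin/sh
# Idempotent hosts-file update: drop any stale entry for the name,
# append the current mapping, then replace the file in one move.
HOSTS=hosts.demo                        # stand-in for /etc/hosts
NAME=control-plane.minikube.internal
IP=192.168.103.2
printf '127.0.0.1\tlocalhost\n10.0.0.9\t%s\n' "$NAME" > "$HOSTS"   # seed a stale entry
{ grep -v "${NAME}\$" "$HOSTS"; printf '%s\t%s\n' "$IP" "$NAME"; } > "$HOSTS.$$"
mv "$HOSTS.$$" "$HOSTS"
grep "$NAME" "$HOSTS"    # exactly one mapping remains, with the new IP
```

Because the stale line is filtered out before the new one is appended, rerunning the script never accumulates duplicate entries.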
	I1212 21:19:16.186126   11652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:19:16.425855   11652 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:19:16.460856   11652 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-864500 for IP: 192.168.103.2
	I1212 21:19:16.460856   11652 certs.go:195] generating shared ca certs ...
	I1212 21:19:16.460856   11652 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:19:16.460856   11652 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1212 21:19:16.461869   11652 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1212 21:19:16.461869   11652 certs.go:257] generating profile certs ...
	I1212 21:19:16.461869   11652 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-864500\client.key
	I1212 21:19:16.462852   11652 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-864500\client.crt with IP's: []
	I1212 21:19:16.566689   11652 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-864500\client.crt ...
	I1212 21:19:16.566689   11652 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-864500\client.crt: {Name:mk40cb68cfb4dfa411ee9313fff530cc998ce19c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:19:16.567684   11652 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-864500\client.key ...
	I1212 21:19:16.567684   11652 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-864500\client.key: {Name:mk4617df7c570cc63a785782f19d84ad2216cf36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:19:16.568684   11652 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-864500\apiserver.key.3b717fea
	I1212 21:19:16.569687   11652 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-864500\apiserver.crt.3b717fea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1212 21:19:16.748855   11652 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-864500\apiserver.crt.3b717fea ...
	I1212 21:19:16.748855   11652 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-864500\apiserver.crt.3b717fea: {Name:mk4baa586715d09778a8b94a81f24cb9f33d714d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:19:16.749860   11652 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-864500\apiserver.key.3b717fea ...
	I1212 21:19:16.749860   11652 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-864500\apiserver.key.3b717fea: {Name:mk3d1ddc488b2fac6bbde7ee04c41d974a134db2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:19:16.749860   11652 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-864500\apiserver.crt.3b717fea -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-864500\apiserver.crt
	I1212 21:19:16.764860   11652 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-864500\apiserver.key.3b717fea -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-864500\apiserver.key
	I1212 21:19:16.765862   11652 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-864500\proxy-client.key
	I1212 21:19:16.765862   11652 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-864500\proxy-client.crt with IP's: []
	I1212 21:19:16.893986   11652 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-864500\proxy-client.crt ...
	I1212 21:19:16.893986   11652 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-864500\proxy-client.crt: {Name:mk0e9afb7c62b4de641b620b63034ac096e4e3e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:19:16.894980   11652 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-864500\proxy-client.key ...
	I1212 21:19:16.894980   11652 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-864500\proxy-client.key: {Name:mk216e9526309240911ca3a6fc473f647dfd7625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:19:16.908974   11652 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem (1338 bytes)
	W1212 21:19:16.909983   11652 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396_empty.pem, impossibly tiny 0 bytes
	I1212 21:19:16.909983   11652 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1212 21:19:16.909983   11652 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 21:19:16.909983   11652 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 21:19:16.909983   11652 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1212 21:19:16.910975   11652 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem (1708 bytes)
	I1212 21:19:16.911979   11652 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:19:16.940983   11652 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 21:19:17.029674   11652 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:19:17.112314   11652 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:19:17.158045   11652 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-864500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1212 21:19:17.195671   11652 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-864500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 21:19:17.223662   11652 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-864500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:19:17.451240   11652 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-864500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 21:19:17.482595   11652 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /usr/share/ca-certificates/133962.pem (1708 bytes)
	I1212 21:19:17.517979   11652 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:19:17.567828   11652 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem --> /usr/share/ca-certificates/13396.pem (1338 bytes)
	I1212 21:19:17.602329   11652 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:19:17.627332   11652 ssh_runner.go:195] Run: openssl version
	I1212 21:19:17.652341   11652 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13396.pem
	I1212 21:19:17.683330   11652 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13396.pem /etc/ssl/certs/13396.pem
	I1212 21:19:17.708347   11652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13396.pem
	I1212 21:19:17.716327   11652 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 21:19:17.720338   11652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13396.pem
	I1212 21:19:17.773332   11652 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:19:17.790328   11652 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/13396.pem /etc/ssl/certs/51391683.0
	I1212 21:19:17.807330   11652 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/133962.pem
	I1212 21:19:17.824332   11652 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/133962.pem /etc/ssl/certs/133962.pem
	I1212 21:19:17.840340   11652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133962.pem
	I1212 21:19:17.847329   11652 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 21:19:17.851329   11652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133962.pem
	I1212 21:19:17.940917   11652 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:19:17.960924   11652 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/133962.pem /etc/ssl/certs/3ec20f2e.0
	I1212 21:19:17.979916   11652 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:19:17.997913   11652 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:19:18.017912   11652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:19:18.027919   11652 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:19:18.031915   11652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:19:18.108924   11652 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:19:18.128914   11652 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 21:19:18.147915   11652 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:19:18.155929   11652 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 21:19:18.156943   11652 kubeadm.go:401] StartCluster: {Name:false-864500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:false-864500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:19:18.159922   11652 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 21:19:18.211069   11652 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:19:18.227070   11652 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:19:18.240078   11652 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 21:19:18.244075   11652 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:19:18.258084   11652 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:19:18.258084   11652 kubeadm.go:158] found existing configuration files:
	
	I1212 21:19:18.262082   11652 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 21:19:18.276086   11652 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 21:19:18.281087   11652 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 21:19:18.305080   11652 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 21:19:18.319099   11652 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 21:19:18.324965   11652 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 21:19:18.347176   11652 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 21:19:18.360171   11652 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 21:19:18.364176   11652 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 21:19:18.385175   11652 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 21:19:18.398180   11652 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 21:19:18.402171   11652 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 21:19:18.418174   11652 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 21:19:18.575067   11652 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1212 21:19:18.579075   11652 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1212 21:19:18.685188   11652 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 21:19:16.405851    2276 cli_runner.go:164] Run: docker container inspect old-k8s-version-246400 --format={{.State.Status}}
	I1212 21:19:16.470849    2276 machine.go:94] provisionDockerMachine start ...
	I1212 21:19:16.474848    2276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-246400
	I1212 21:19:16.542854    2276 main.go:143] libmachine: Using SSH client type: native
	I1212 21:19:16.558681    2276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62107 <nil> <nil>}
	I1212 21:19:16.558681    2276 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:19:16.741859    2276 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-246400
	
	I1212 21:19:16.741859    2276 ubuntu.go:182] provisioning hostname "old-k8s-version-246400"
	I1212 21:19:16.746854    2276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-246400
	I1212 21:19:16.805578    2276 main.go:143] libmachine: Using SSH client type: native
	I1212 21:19:16.806243    2276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62107 <nil> <nil>}
	I1212 21:19:16.806296    2276 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-246400 && echo "old-k8s-version-246400" | sudo tee /etc/hostname
	I1212 21:19:17.092776    2276 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-246400
	
	I1212 21:19:17.097726    2276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-246400
	I1212 21:19:17.167429    2276 main.go:143] libmachine: Using SSH client type: native
	I1212 21:19:17.167637    2276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62107 <nil> <nil>}
	I1212 21:19:17.167637    2276 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-246400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-246400/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-246400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:19:17.511349    2276 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:19:17.511400    2276 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1212 21:19:17.511455    2276 ubuntu.go:190] setting up certificates
	I1212 21:19:17.511506    2276 provision.go:84] configureAuth start
	I1212 21:19:17.517166    2276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-246400
	I1212 21:19:17.586346    2276 provision.go:143] copyHostCerts
	I1212 21:19:17.586346    2276 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1212 21:19:17.586346    2276 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1212 21:19:17.587347    2276 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 21:19:17.588335    2276 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1212 21:19:17.588335    2276 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1212 21:19:17.589333    2276 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1212 21:19:17.590333    2276 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1212 21:19:17.590333    2276 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1212 21:19:17.590333    2276 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 21:19:17.591331    2276 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.old-k8s-version-246400 san=[127.0.0.1 192.168.112.2 localhost minikube old-k8s-version-246400]
	I1212 21:19:17.686336    2276 provision.go:177] copyRemoteCerts
	I1212 21:19:17.693363    2276 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:19:17.697335    2276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-246400
	I1212 21:19:17.747337    2276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62107 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-246400\id_rsa Username:docker}
	I1212 21:19:17.878344    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 21:19:17.922915    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1233 bytes)
	I1212 21:19:17.959927    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 21:19:17.992925    2276 provision.go:87] duration metric: took 481.4114ms to configureAuth
	I1212 21:19:17.992925    2276 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:19:17.993918    2276 config.go:182] Loaded profile config "old-k8s-version-246400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I1212 21:19:17.997913    2276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-246400
	I1212 21:19:18.062910    2276 main.go:143] libmachine: Using SSH client type: native
	I1212 21:19:18.063912    2276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62107 <nil> <nil>}
	I1212 21:19:18.063912    2276 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 21:19:18.242075    2276 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1212 21:19:18.243077    2276 ubuntu.go:71] root file system type: overlay
	I1212 21:19:18.243077    2276 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 21:19:18.246081    2276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-246400
	I1212 21:19:18.305080    2276 main.go:143] libmachine: Using SSH client type: native
	I1212 21:19:18.306085    2276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62107 <nil> <nil>}
	I1212 21:19:18.306085    2276 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 21:19:18.526098    2276 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 21:19:18.531078    2276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-246400
	I1212 21:19:18.586067    2276 main.go:143] libmachine: Using SSH client type: native
	I1212 21:19:18.587068    2276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62107 <nil> <nil>}
	I1212 21:19:18.587068    2276 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 21:19:16.858721   11500 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1212 21:19:17.584332   11500 cli_runner.go:217] Completed: docker run --rm --name no-preload-285600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-285600 --entrypoint /usr/bin/test -v no-preload-285600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib: (1.5877722s)
	I1212 21:19:17.584332   11500 oci.go:107] Successfully prepared a docker volume no-preload-285600
	I1212 21:19:17.584332   11500 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 21:19:17.588335   11500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:19:17.704335   11500 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1212 21:19:17.704335   11500 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 5.0350991s
	I1212 21:19:17.704335   11500 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1212 21:19:17.833342   11500 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-12 21:19:17.815740068 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:19:17.836342   11500 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 21:19:18.101913   11500 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-285600 --name no-preload-285600 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-285600 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-285600 --network no-preload-285600 --ip 192.168.121.2 --volume no-preload-285600:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138
	I1212 21:19:18.418174   11500 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1212 21:19:18.418174   11500 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 5.7489275s
	I1212 21:19:18.418174   11500 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1212 21:19:18.437458   11500 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1212 21:19:18.437458   11500 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 5.7682114s
	I1212 21:19:18.437458   11500 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1212 21:19:18.645105   11500 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1212 21:19:18.645105   11500 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 5.9758542s
	I1212 21:19:18.645105   11500 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1212 21:19:18.699585   11500 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1212 21:19:18.699755   11500 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 6.0305038s
	I1212 21:19:18.699755   11500 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1212 21:19:18.699755   11500 cache.go:87] Successfully saved all images to host disk.
	I1212 21:19:18.814270   11500 cli_runner.go:164] Run: docker container inspect no-preload-285600 --format={{.State.Running}}
	I1212 21:19:18.878263   11500 cli_runner.go:164] Run: docker container inspect no-preload-285600 --format={{.State.Status}}
	I1212 21:19:18.942266   11500 cli_runner.go:164] Run: docker exec no-preload-285600 stat /var/lib/dpkg/alternatives/iptables
	I1212 21:19:19.064845   11500 oci.go:144] the created container "no-preload-285600" has a running status.
	I1212 21:19:19.064845   11500 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa...
	I1212 21:19:19.101842   11500 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 21:19:19.181163   11500 cli_runner.go:164] Run: docker container inspect no-preload-285600 --format={{.State.Status}}
	I1212 21:19:19.241174   11500 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 21:19:19.241174   11500 kic_runner.go:114] Args: [docker exec --privileged no-preload-285600 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 21:19:19.361414   11500 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa...
	I1212 21:19:21.556540   11500 cli_runner.go:164] Run: docker container inspect no-preload-285600 --format={{.State.Status}}
	I1212 21:19:21.609928   11500 machine.go:94] provisionDockerMachine start ...
	I1212 21:19:21.612924   11500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:19:21.672687   11500 main.go:143] libmachine: Using SSH client type: native
	I1212 21:19:21.686901   11500 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62132 <nil> <nil>}
	I1212 21:19:21.686901   11500 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:19:22.440668    2276 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-12 21:19:18.517818415 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1212 21:19:22.440668    2276 machine.go:97] duration metric: took 5.9697247s to provisionDockerMachine
	I1212 21:19:22.440668    2276 client.go:176] duration metric: took 36.4407102s to LocalClient.Create
	I1212 21:19:22.440668    2276 start.go:167] duration metric: took 36.4407102s to libmachine.API.Create "old-k8s-version-246400"
	I1212 21:19:22.440668    2276 start.go:293] postStartSetup for "old-k8s-version-246400" (driver="docker")
	I1212 21:19:22.440668    2276 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:19:22.445668    2276 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:19:22.448667    2276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-246400
	I1212 21:19:22.504302    2276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62107 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-246400\id_rsa Username:docker}
	I1212 21:19:22.642663    2276 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:19:22.650268    2276 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:19:22.650311    2276 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:19:22.650311    2276 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1212 21:19:22.650623    2276 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1212 21:19:22.651244    2276 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> 133962.pem in /etc/ssl/certs
	I1212 21:19:22.656459    2276 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:19:22.671747    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /etc/ssl/certs/133962.pem (1708 bytes)
	I1212 21:19:22.700606    2276 start.go:296] duration metric: took 259.9337ms for postStartSetup
	I1212 21:19:22.708782    2276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-246400
	I1212 21:19:22.758433    2276 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-246400\config.json ...
	I1212 21:19:22.763435    2276 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:19:22.767433    2276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-246400
	I1212 21:19:22.821470    2276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62107 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-246400\id_rsa Username:docker}
	I1212 21:19:22.945131    2276 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:19:22.954132    2276 start.go:128] duration metric: took 36.9571608s to createHost
	I1212 21:19:22.954132    2276 start.go:83] releasing machines lock for "old-k8s-version-246400", held for 36.9571608s
	I1212 21:19:22.958119    2276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-246400
	I1212 21:19:23.014119    2276 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1212 21:19:23.018119    2276 ssh_runner.go:195] Run: cat /version.json
	I1212 21:19:23.018119    2276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-246400
	I1212 21:19:23.021120    2276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-246400
	I1212 21:19:23.077119    2276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62107 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-246400\id_rsa Username:docker}
	I1212 21:19:23.078119    2276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62107 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-246400\id_rsa Username:docker}
	I1212 21:19:23.202688    2276 ssh_runner.go:195] Run: systemctl --version
	W1212 21:19:23.207651    2276 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1212 21:19:23.216344    2276 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:19:23.229386    2276 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:19:23.235739    2276 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:19:23.289129    2276 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 21:19:23.289129    2276 start.go:496] detecting cgroup driver to use...
	I1212 21:19:23.289129    2276 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:19:23.289129    2276 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1212 21:19:23.306178    2276 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1212 21:19:23.306178    2276 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1212 21:19:23.318811    2276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 21:19:23.336814    2276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 21:19:23.350814    2276 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 21:19:23.355834    2276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 21:19:23.386396    2276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:19:23.410358    2276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 21:19:23.431173    2276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:19:23.449173    2276 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:19:23.466159    2276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 21:19:23.483165    2276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 21:19:23.501163    2276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 21:19:23.518164    2276 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:19:23.533169    2276 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:19:23.548159    2276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:19:23.684006    2276 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 21:19:23.858732    2276 start.go:496] detecting cgroup driver to use...
	I1212 21:19:23.858732    2276 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:19:23.864262    2276 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 21:19:23.891825    2276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:19:23.913251    2276 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:19:23.955080    2276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:19:23.982742    2276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 21:19:24.001415    2276 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:19:24.031099    2276 ssh_runner.go:195] Run: which cri-dockerd
	I1212 21:19:24.041300    2276 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 21:19:24.057132    2276 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 21:19:24.082162    2276 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 21:19:24.244890    2276 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 21:19:24.398551    2276 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 21:19:24.399135    2276 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 21:19:24.425679    2276 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1212 21:19:24.450370    2276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:19:24.592818    2276 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 21:19:25.586006    2276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:19:25.613439    2276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1212 21:19:25.638685    2276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:19:25.663010    2276 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 21:19:25.807624    2276 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 21:19:25.962847    2276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:19:26.176078    2276 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 21:19:26.203850    2276 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1212 21:19:26.226429    2276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:19:26.381714    2276 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1212 21:19:26.506851    2276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:19:26.525298    2276 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 21:19:26.531346    2276 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 21:19:26.540490    2276 start.go:564] Will wait 60s for crictl version
	I1212 21:19:26.544473    2276 ssh_runner.go:195] Run: which crictl
	I1212 21:19:26.556466    2276 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:19:26.598816    2276 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1212 21:19:26.603641    2276 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:19:26.653554    2276 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:19:21.861446   11500 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-285600
	
	I1212 21:19:21.861446   11500 ubuntu.go:182] provisioning hostname "no-preload-285600"
	I1212 21:19:21.864807   11500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:19:21.925344   11500 main.go:143] libmachine: Using SSH client type: native
	I1212 21:19:21.925344   11500 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62132 <nil> <nil>}
	I1212 21:19:21.925344   11500 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-285600 && echo "no-preload-285600" | sudo tee /etc/hostname
	I1212 21:19:22.111736   11500 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-285600
	
	I1212 21:19:22.114749   11500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:19:22.166737   11500 main.go:143] libmachine: Using SSH client type: native
	I1212 21:19:22.167742   11500 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62132 <nil> <nil>}
	I1212 21:19:22.167742   11500 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-285600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-285600/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-285600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:19:22.351585   11500 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:19:22.351585   11500 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1212 21:19:22.351635   11500 ubuntu.go:190] setting up certificates
	I1212 21:19:22.351690   11500 provision.go:84] configureAuth start
	I1212 21:19:22.354709   11500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-285600
	I1212 21:19:22.410676   11500 provision.go:143] copyHostCerts
	I1212 21:19:22.410676   11500 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1212 21:19:22.411683   11500 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1212 21:19:22.411683   11500 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 21:19:22.412667   11500 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1212 21:19:22.412667   11500 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1212 21:19:22.412667   11500 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 21:19:22.413685   11500 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1212 21:19:22.413685   11500 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1212 21:19:22.413685   11500 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1212 21:19:22.414669   11500 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.no-preload-285600 san=[127.0.0.1 192.168.121.2 localhost minikube no-preload-285600]
	I1212 21:19:22.570511   11500 provision.go:177] copyRemoteCerts
	I1212 21:19:22.575186   11500 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:19:22.578325   11500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:19:22.636170   11500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62132 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	I1212 21:19:22.774439   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 21:19:22.810478   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 21:19:22.841287   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 21:19:22.872627   11500 provision.go:87] duration metric: took 520.9288ms to configureAuth
	I1212 21:19:22.872627   11500 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:19:22.872627   11500 config.go:182] Loaded profile config "no-preload-285600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:19:22.875628   11500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:19:22.929627   11500 main.go:143] libmachine: Using SSH client type: native
	I1212 21:19:22.929627   11500 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62132 <nil> <nil>}
	I1212 21:19:22.929627   11500 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 21:19:23.103684   11500 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1212 21:19:23.103684   11500 ubuntu.go:71] root file system type: overlay
	I1212 21:19:23.104289   11500 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 21:19:23.109961   11500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:19:23.165648   11500 main.go:143] libmachine: Using SSH client type: native
	I1212 21:19:23.166666   11500 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62132 <nil> <nil>}
	I1212 21:19:23.166666   11500 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 21:19:23.350814   11500 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
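The comment block in the unit file above explains the drop-in override pattern: the bare `ExecStart=` clears any inherited command so that exactly one effective command remains (systemd refuses multiple `ExecStart=` settings for non-oneshot services). A small sketch of that shape:

```shell
# Sketch of the ExecStart override pattern from the unit file above:
# an empty ExecStart= resets the list, then one real command is set.
unit=$(mktemp)
cat > "$unit" <<'EOF'
[Service]
Type=notify
ExecStart=
ExecStart=/usr/bin/dockerd -H fd://
EOF
# Count ExecStart= lines that actually carry a command.
effective=$(grep -c '^ExecStart=/' "$unit")
echo "$effective"   # -> 1
```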
	
	I1212 21:19:23.355834   11500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:19:23.422178   11500 main.go:143] libmachine: Using SSH client type: native
	I1212 21:19:23.422178   11500 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62132 <nil> <nil>}
	I1212 21:19:23.422178   11500 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 21:19:24.755983   11500 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-12 21:19:23.338300308 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1212 21:19:24.755983   11500 machine.go:97] duration metric: took 3.1460049s to provisionDockerMachine
	I1212 21:19:24.755983   11500 client.go:176] duration metric: took 11.6730993s to LocalClient.Create
	I1212 21:19:24.755983   11500 start.go:167] duration metric: took 11.6730993s to libmachine.API.Create "no-preload-285600"
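The `diff -u ... || { mv ...; daemon-reload; restart; }` command logged above only installs the new unit (and restarts docker) when the candidate file actually differs. A sketch of that replace-only-if-changed flow, with temp files standing in for /lib/systemd/system/docker.service:

```shell
# Sketch of the update-if-changed pattern from the logged command:
# diff the current and candidate files; only replace on a difference.
current=$(mktemp); candidate=$(mktemp)
printf 'ExecStart=/usr/bin/dockerd -H fd://\n' > "$current"
printf 'ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376\n' > "$candidate"
if diff -u "$current" "$candidate" > /dev/null; then
  action=unchanged            # identical: leave the unit alone
else
  mv "$candidate" "$current"  # differs: install the new unit
  action=replaced             # real flow also runs daemon-reload + restart
fi
echo "$action"   # -> replaced
```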
	I1212 21:19:24.755983   11500 start.go:293] postStartSetup for "no-preload-285600" (driver="docker")
	I1212 21:19:24.755983   11500 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:19:24.759925   11500 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:19:24.762906   11500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:19:24.814451   11500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62132 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	I1212 21:19:24.943802   11500 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:19:24.951956   11500 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:19:24.951956   11500 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:19:24.951956   11500 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1212 21:19:24.951956   11500 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1212 21:19:24.953335   11500 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> 133962.pem in /etc/ssl/certs
	I1212 21:19:24.958227   11500 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:19:24.973362   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /etc/ssl/certs/133962.pem (1708 bytes)
	I1212 21:19:25.005913   11500 start.go:296] duration metric: took 249.9267ms for postStartSetup
	I1212 21:19:25.010921   11500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-285600
	I1212 21:19:25.070030   11500 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\config.json ...
	I1212 21:19:25.076026   11500 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:19:25.079022   11500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:19:25.136022   11500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62132 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	I1212 21:19:25.258665   11500 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:19:25.270120   11500 start.go:128] duration metric: took 12.1912296s to createHost
	I1212 21:19:25.270120   11500 start.go:83] releasing machines lock for "no-preload-285600", held for 12.1912296s
	I1212 21:19:25.273729   11500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-285600
	I1212 21:19:25.329383   11500 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1212 21:19:25.333382   11500 ssh_runner.go:195] Run: cat /version.json
	I1212 21:19:25.333382   11500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:19:25.336386   11500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:19:25.387387   11500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62132 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	I1212 21:19:25.389394   11500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62132 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	W1212 21:19:25.499964   11500 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1212 21:19:25.514068   11500 ssh_runner.go:195] Run: systemctl --version
	I1212 21:19:25.530410   11500 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:19:25.539419   11500 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:19:25.544827   11500 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W1212 21:19:25.585530   11500 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1212 21:19:25.585580   11500 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1212 21:19:25.618893   11500 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 21:19:25.618893   11500 start.go:496] detecting cgroup driver to use...
	I1212 21:19:25.618893   11500 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:19:25.618893   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:19:25.645975   11500 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 21:19:25.664008   11500 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 21:19:25.678013   11500 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 21:19:25.682009   11500 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 21:19:25.700007   11500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:19:25.721787   11500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 21:19:25.743166   11500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:19:25.766435   11500 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:19:25.784623   11500 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 21:19:25.802637   11500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 21:19:25.820626   11500 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
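The run of sed commands above edits /etc/containerd/config.toml in place while preserving indentation via a captured group. A sketch of the sandbox_image rewrite against a temp file (the input line here is made up):

```shell
# Sketch of the indentation-preserving sed edit logged above, applied
# to a temp file instead of /etc/containerd/config.toml.
toml=$(mktemp)
printf '    sandbox_image = "registry.k8s.io/pause:3.9"\n' > "$toml"
# The ( *) capture keeps the original leading spaces in the replacement.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' "$toml"
line=$(cat "$toml")
echo "$line"
```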
	I1212 21:19:25.839624   11500 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:19:25.860985   11500 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:19:25.876598   11500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:19:26.043670   11500 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 21:19:26.260282   11500 start.go:496] detecting cgroup driver to use...
	I1212 21:19:26.260282   11500 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:19:26.265266   11500 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 21:19:26.291525   11500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:19:26.318524   11500 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:19:26.403712   11500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:19:26.429934   11500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 21:19:26.448819   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
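The two `tee /etc/crictl.yaml` commands in this run first point crictl at containerd, then overwrite the file once cri-dockerd becomes the runtime. A sketch of that endpoint switch with a temp file in place of /etc/crictl.yaml:

```shell
# Sketch of the crictl.yaml endpoint switch seen in the logged commands.
cfg=$(mktemp)
# First the containerd endpoint is written...
printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' > "$cfg"
# ...then, once cri-dockerd is active, the file is simply rewritten.
printf 'runtime-endpoint: unix:///var/run/cri-dockerd.sock\n' > "$cfg"
endpoint=$(cut -d' ' -f2 "$cfg")
echo "$endpoint"   # -> unix:///var/run/cri-dockerd.sock
```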
	I1212 21:19:26.477300   11500 ssh_runner.go:195] Run: which cri-dockerd
	I1212 21:19:26.488903   11500 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 21:19:26.508895   11500 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1212 21:19:26.536014   11500 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 21:19:26.693194   11500 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 21:19:26.860928   11500 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 21:19:26.861148   11500 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 21:19:26.885980   11500 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1212 21:19:26.908995   11500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:19:27.076743   11500 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 21:19:28.078996   11500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:19:28.102938   11500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1212 21:19:28.127357   11500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:19:28.155347   11500 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 21:19:28.315684   11500 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 21:19:28.469677   11500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:19:28.625339   11500 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 21:19:28.650343   11500 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1212 21:19:28.671957   11500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:19:28.821662   11500 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1212 21:19:28.940893   11500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:19:28.962463   11500 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 21:19:28.967843   11500 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 21:19:28.976555   11500 start.go:564] Will wait 60s for crictl version
	I1212 21:19:28.980749   11500 ssh_runner.go:195] Run: which crictl
	I1212 21:19:28.993035   11500 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:19:29.038667   11500 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1212 21:19:29.041667   11500 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:19:29.095509   11500 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:19:26.694340    2276 out.go:252] * Preparing Kubernetes v1.28.0 on Docker 29.1.2 ...
	I1212 21:19:26.697770    2276 cli_runner.go:164] Run: docker exec -t old-k8s-version-246400 dig +short host.docker.internal
	I1212 21:19:26.845374    2276 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1212 21:19:26.849345    2276 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1212 21:19:26.856555    2276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:19:26.876986    2276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-246400
	I1212 21:19:26.933991    2276 kubeadm.go:884] updating cluster {Name:old-k8s-version-246400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-246400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.112.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 21:19:26.934990    2276 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1212 21:19:26.938986    2276 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:19:26.968106    2276 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.0
	registry.k8s.io/kube-scheduler:v1.28.0
	registry.k8s.io/kube-controller-manager:v1.28.0
	registry.k8s.io/kube-proxy:v1.28.0
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 21:19:26.968106    2276 docker.go:621] Images already preloaded, skipping extraction
	I1212 21:19:26.971102    2276 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:19:27.005563    2276 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.0
	registry.k8s.io/kube-scheduler:v1.28.0
	registry.k8s.io/kube-controller-manager:v1.28.0
	registry.k8s.io/kube-proxy:v1.28.0
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 21:19:27.005563    2276 cache_images.go:86] Images are preloaded, skipping loading
	I1212 21:19:27.005563    2276 kubeadm.go:935] updating node { 192.168.112.2 8443 v1.28.0 docker true true} ...
	I1212 21:19:27.005563    2276 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-246400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.112.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-246400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:19:27.009795    2276 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 21:19:27.095155    2276 cni.go:84] Creating CNI manager for ""
	I1212 21:19:27.095155    2276 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:19:27.095155    2276 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 21:19:27.095155    2276 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.112.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-246400 NodeName:old-k8s-version-246400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.112.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.112.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:19:27.095155    2276 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.112.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "old-k8s-version-246400"
	  kubeletExtraArgs:
	    node-ip: 192.168.112.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.112.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:19:27.099843    2276 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1212 21:19:27.116766    2276 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:19:27.121766    2276 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:19:27.135774    2276 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1212 21:19:27.156766    2276 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:19:27.176719    2276 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I1212 21:19:27.203582    2276 ssh_runner.go:195] Run: grep 192.168.112.2	control-plane.minikube.internal$ /etc/hosts
	I1212 21:19:27.212609    2276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.112.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:19:27.235593    2276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:19:27.357209    2276 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:19:27.378211    2276 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-246400 for IP: 192.168.112.2
	I1212 21:19:27.379216    2276 certs.go:195] generating shared ca certs ...
	I1212 21:19:27.379216    2276 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:19:27.379216    2276 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1212 21:19:27.379216    2276 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1212 21:19:27.379216    2276 certs.go:257] generating profile certs ...
	I1212 21:19:27.380216    2276 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-246400\client.key
	I1212 21:19:27.380216    2276 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-246400\client.crt with IP's: []
	I1212 21:19:27.482398    2276 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-246400\client.crt ...
	I1212 21:19:27.482398    2276 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-246400\client.crt: {Name:mk4774d5e456a0bf00e4f7b0f8e22379800572a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:19:27.483389    2276 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-246400\client.key ...
	I1212 21:19:27.483389    2276 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-246400\client.key: {Name:mkee163a8f301d29bdd649d42d1b3196ac688d93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:19:27.484391    2276 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-246400\apiserver.key.f6ccbe8f
	I1212 21:19:27.484391    2276 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-246400\apiserver.crt.f6ccbe8f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.112.2]
	I1212 21:19:27.599833    2276 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-246400\apiserver.crt.f6ccbe8f ...
	I1212 21:19:27.599833    2276 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-246400\apiserver.crt.f6ccbe8f: {Name:mk2bf936b71374f98625b42ca2d07b7e44e2b42a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:19:27.600954    2276 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-246400\apiserver.key.f6ccbe8f ...
	I1212 21:19:27.600954    2276 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-246400\apiserver.key.f6ccbe8f: {Name:mkc75e6b2d9a6d47d05dc2edb78729bd69527046 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:19:27.601523    2276 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-246400\apiserver.crt.f6ccbe8f -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-246400\apiserver.crt
	I1212 21:19:27.620610    2276 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-246400\apiserver.key.f6ccbe8f -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-246400\apiserver.key
	I1212 21:19:27.621690    2276 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-246400\proxy-client.key
	I1212 21:19:27.622267    2276 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-246400\proxy-client.crt with IP's: []
	I1212 21:19:27.710181    2276 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-246400\proxy-client.crt ...
	I1212 21:19:27.710181    2276 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-246400\proxy-client.crt: {Name:mk10785272c77547db18becc089e8b8fb2f3a23b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:19:27.710975    2276 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-246400\proxy-client.key ...
	I1212 21:19:27.710975    2276 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-246400\proxy-client.key: {Name:mk49494075b0a2d1363f44b0d96244ff0044cbbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:19:27.725481    2276 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem (1338 bytes)
	W1212 21:19:27.726581    2276 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396_empty.pem, impossibly tiny 0 bytes
	I1212 21:19:27.726581    2276 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1212 21:19:27.726889    2276 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 21:19:27.727049    2276 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 21:19:27.727049    2276 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1212 21:19:27.727642    2276 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem (1708 bytes)
	I1212 21:19:27.727841    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:19:27.764241    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 21:19:27.792294    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:19:27.824028    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:19:27.856738    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-246400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1212 21:19:27.888779    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-246400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 21:19:27.924319    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-246400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:19:27.956997    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-246400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:19:27.984194    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem --> /usr/share/ca-certificates/13396.pem (1338 bytes)
	I1212 21:19:28.011836    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /usr/share/ca-certificates/133962.pem (1708 bytes)
	I1212 21:19:28.045538    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:19:28.078996    2276 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:19:28.107760    2276 ssh_runner.go:195] Run: openssl version
	I1212 21:19:28.120355    2276 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13396.pem
	I1212 21:19:28.139357    2276 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13396.pem /etc/ssl/certs/13396.pem
	I1212 21:19:28.156350    2276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13396.pem
	I1212 21:19:28.163349    2276 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 21:19:28.166346    2276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13396.pem
	I1212 21:19:28.220899    2276 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:19:28.240473    2276 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/13396.pem /etc/ssl/certs/51391683.0
	I1212 21:19:28.262570    2276 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/133962.pem
	I1212 21:19:28.280687    2276 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/133962.pem /etc/ssl/certs/133962.pem
	I1212 21:19:28.296682    2276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133962.pem
	I1212 21:19:28.303691    2276 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 21:19:28.308682    2276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133962.pem
	I1212 21:19:28.356496    2276 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:19:28.375521    2276 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/133962.pem /etc/ssl/certs/3ec20f2e.0
	I1212 21:19:28.398995    2276 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:19:28.416985    2276 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:19:28.435993    2276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:19:28.447667    2276 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:19:28.451665    2276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:19:28.499870    2276 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:19:28.517882    2276 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 21:19:28.533892    2276 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:19:28.541894    2276 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 21:19:28.541894    2276 kubeadm.go:401] StartCluster: {Name:old-k8s-version-246400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-246400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.112.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:19:28.548120    2276 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 21:19:28.590056    2276 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:19:28.606337    2276 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:19:28.619337    2276 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 21:19:28.624333    2276 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:19:28.636342    2276 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:19:28.636342    2276 kubeadm.go:158] found existing configuration files:
	
	I1212 21:19:28.640335    2276 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 21:19:28.653345    2276 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 21:19:28.657338    2276 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 21:19:28.676455    2276 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 21:19:28.689528    2276 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 21:19:28.692527    2276 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 21:19:28.712950    2276 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 21:19:28.727012    2276 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 21:19:28.730006    2276 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 21:19:28.752648    2276 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 21:19:28.764635    2276 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 21:19:28.768636    2276 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 21:19:28.783631    2276 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 21:19:28.895250    2276 kubeadm.go:319] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1212 21:19:29.034981    2276 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 21:19:29.148063   11500 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1212 21:19:29.152600   11500 cli_runner.go:164] Run: docker exec -t no-preload-285600 dig +short host.docker.internal
	I1212 21:19:29.288119   11500 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1212 21:19:29.292140   11500 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1212 21:19:29.299142   11500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:19:29.322124   11500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:19:29.375121   11500 kubeadm.go:884] updating cluster {Name:no-preload-285600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-285600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 21:19:29.375121   11500 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 21:19:29.378121   11500 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:19:29.408418   11500 docker.go:691] Got preloaded images: 
	I1212 21:19:29.408418   11500 docker.go:697] registry.k8s.io/kube-apiserver:v1.35.0-beta.0 wasn't preloaded
	I1212 21:19:29.408418   11500 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 21:19:29.422575   11500 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:19:29.428576   11500 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 21:19:29.434434   11500 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 21:19:29.434434   11500 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:19:29.439432   11500 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 21:19:29.439432   11500 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 21:19:29.443463   11500 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 21:19:29.445438   11500 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1212 21:19:29.449437   11500 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 21:19:29.450445   11500 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 21:19:29.454443   11500 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1212 21:19:29.454443   11500 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1212 21:19:29.457438   11500 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1212 21:19:29.458458   11500 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 21:19:29.463437   11500 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1212 21:19:29.465439   11500 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	W1212 21:19:29.493004   11500 image.go:191] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1212 21:19:29.551184   11500 image.go:191] authn lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1212 21:19:29.599448   11500 image.go:191] authn lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1212 21:19:29.659625   11500 image.go:191] authn lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1212 21:19:29.710024   11500 image.go:191] authn lookup for registry.k8s.io/etcd:3.6.5-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1212 21:19:29.766651   11500 image.go:191] authn lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1212 21:19:29.771565   11500 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 21:19:29.806127   11500 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1212 21:19:29.806127   11500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1212 21:19:29.806127   11500 docker.go:338] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 21:19:29.811133   11500 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	W1212 21:19:29.823135   11500 image.go:191] authn lookup for registry.k8s.io/pause:3.10.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1212 21:19:29.826122   11500 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 21:19:29.846133   11500 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1212 21:19:29.852129   11500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1212 21:19:29.862134   11500 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1212 21:19:29.862134   11500 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1212 21:19:29.862134   11500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1212 21:19:29.862134   11500 docker.go:338] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 21:19:29.862134   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1212 21:19:29.866134   11500 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 21:19:29.878135   11500 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	W1212 21:19:29.889139   11500 image.go:191] authn lookup for registry.k8s.io/coredns/coredns:v1.13.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1212 21:19:29.924149   11500 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1212 21:19:29.953133   11500 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1212 21:19:29.958139   11500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1212 21:19:29.973137   11500 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1212 21:19:29.973137   11500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1212 21:19:29.973137   11500 docker.go:338] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 21:19:29.979136   11500 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 21:19:29.979136   11500 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 21:19:30.004145   11500 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1212 21:19:30.004145   11500 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1212 21:19:30.004145   11500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1212 21:19:30.004145   11500 docker.go:338] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1212 21:19:30.004145   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1212 21:19:30.009142   11500 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.6.5-0
	I1212 21:19:30.036155   11500 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1212 21:19:30.110344   11500 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1212 21:19:30.111335   11500 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1212 21:19:30.111335   11500 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1212 21:19:30.111335   11500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1212 21:19:30.111335   11500 docker.go:338] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 21:19:30.117333   11500 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 21:19:30.118329   11500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1212 21:19:30.145351   11500 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1212 21:19:30.151333   11500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1212 21:19:30.217129   11500 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1212 21:19:30.217244   11500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1212 21:19:30.217292   11500 docker.go:338] Removing image: registry.k8s.io/pause:3.10.1
	I1212 21:19:30.226428   11500 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.10.1
	I1212 21:19:30.260968   11500 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1212 21:19:30.260968   11500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1212 21:19:30.260968   11500 docker.go:338] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1212 21:19:30.263968   11500 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1212 21:19:30.267968   11500 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1212 21:19:30.267968   11500 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1212 21:19:30.268975   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1212 21:19:30.272973   11500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1212 21:19:30.278968   11500 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1212 21:19:30.278968   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1212 21:19:30.351207   11500 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1212 21:19:30.353201   11500 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1212 21:19:30.353201   11500 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1212 21:19:30.353201   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1212 21:19:30.359212   11500 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:19:30.359212   11500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1212 21:19:30.360235   11500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1212 21:19:30.482205   11500 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1212 21:19:30.482205   11500 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1212 21:19:30.482205   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1212 21:19:30.482205   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1212 21:19:30.484225   11500 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1212 21:19:30.485208   11500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1212 21:19:30.485208   11500 docker.go:338] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:19:30.490204   11500 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:19:30.691207   11500 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1212 21:19:30.696204   11500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1212 21:19:30.746217   11500 docker.go:305] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1212 21:19:30.747215   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.10.1 | docker load"
	I1212 21:19:30.888209   11500 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1212 21:19:30.888209   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1212 21:19:31.077210   11500 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 from cache
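The lines above show minikube's per-image cache cycle: probe the node with `stat` to see whether the image archive already exists, transfer it over SSH if not, then stream it into Docker with `docker load`. A minimal sketch of the existence-check step, using a hypothetical `check_needs_transfer` helper (the real logic lives in Go in `ssh_runner.go` / `cache_images.go`; the scp and `docker load` steps are omitted here since they need a live node):

```shell
# Mirrors the "existence check" runs above: stat exits non-zero when the
# file is missing, which is what triggers the scp + docker load path.
check_needs_transfer() {
    ! stat -c "%s %y" "$1" >/dev/null 2>&1
}

tmp=$(mktemp)
check_needs_transfer "$tmp" && echo "missing" || echo "present"   # prints "present"
rm -f "$tmp"
check_needs_transfer "$tmp" && echo "missing" || echo "present"   # prints "missing"
```

When the check reports missing, the log's next step is `scp <cache path> --> /var/lib/minikube/images/<name>` followed by `sudo cat <archive> | docker load`.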
	I1212 21:19:31.252229   11500 docker.go:305] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1212 21:19:31.252229   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 | docker load"
	I1212 21:19:36.407728   11500 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 | docker load": (5.1554171s)
	I1212 21:19:36.407775   11500 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 from cache
	I1212 21:19:36.407775   11500 docker.go:305] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1212 21:19:36.407775   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 | docker load"
	I1212 21:19:38.427711   11652 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 21:19:38.427711   11652 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 21:19:38.427711   11652 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:19:38.428708   11652 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:19:38.428708   11652 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 21:19:38.428708   11652 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:19:38.433003   11652 out.go:252]   - Generating certificates and keys ...
	I1212 21:19:38.433003   11652 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 21:19:38.433529   11652 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 21:19:38.433750   11652 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 21:19:38.433844   11652 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 21:19:38.434013   11652 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 21:19:38.434013   11652 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 21:19:38.434013   11652 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 21:19:38.434013   11652 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [false-864500 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1212 21:19:38.434565   11652 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 21:19:38.435059   11652 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [false-864500 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1212 21:19:38.435278   11652 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 21:19:38.435278   11652 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 21:19:38.435278   11652 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 21:19:38.435278   11652 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:19:38.435278   11652 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:19:38.435893   11652 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 21:19:38.436106   11652 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:19:38.436106   11652 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:19:38.436106   11652 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:19:38.436106   11652 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:19:38.436704   11652 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:19:38.441578   11652 out.go:252]   - Booting up control plane ...
	I1212 21:19:38.441578   11652 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:19:38.441578   11652 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:19:38.442183   11652 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:19:38.442293   11652 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:19:38.442293   11652 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 21:19:38.442837   11652 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 21:19:38.442924   11652 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:19:38.442924   11652 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 21:19:38.442924   11652 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 21:19:38.443881   11652 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 21:19:38.443881   11652 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.502438319s
	I1212 21:19:38.443881   11652 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 21:19:38.443881   11652 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1212 21:19:38.443881   11652 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 21:19:38.443881   11652 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 21:19:38.443881   11652 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.073770417s
	I1212 21:19:38.444876   11652 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.799893849s
	I1212 21:19:38.444876   11652 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 11.50209012s
	I1212 21:19:38.444876   11652 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 21:19:38.444876   11652 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 21:19:38.444876   11652 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 21:19:38.444876   11652 kubeadm.go:319] [mark-control-plane] Marking the node false-864500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 21:19:38.445875   11652 kubeadm.go:319] [bootstrap-token] Using token: mp97cm.8hjjnzaub1tyagxk
	I1212 21:19:38.449877   11652 out.go:252]   - Configuring RBAC rules ...
	I1212 21:19:38.450877   11652 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 21:19:38.450877   11652 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 21:19:38.450877   11652 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 21:19:38.450877   11652 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 21:19:38.450877   11652 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 21:19:38.451879   11652 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 21:19:38.451879   11652 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 21:19:38.451879   11652 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 21:19:38.451879   11652 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 21:19:38.451879   11652 kubeadm.go:319] 
	I1212 21:19:38.451879   11652 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 21:19:38.451879   11652 kubeadm.go:319] 
	I1212 21:19:38.451879   11652 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 21:19:38.451879   11652 kubeadm.go:319] 
	I1212 21:19:38.451879   11652 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 21:19:38.451879   11652 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 21:19:38.451879   11652 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 21:19:38.451879   11652 kubeadm.go:319] 
	I1212 21:19:38.452872   11652 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 21:19:38.452872   11652 kubeadm.go:319] 
	I1212 21:19:38.452872   11652 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 21:19:38.452872   11652 kubeadm.go:319] 
	I1212 21:19:38.452872   11652 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 21:19:38.452872   11652 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 21:19:38.452872   11652 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 21:19:38.452872   11652 kubeadm.go:319] 
	I1212 21:19:38.452872   11652 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 21:19:38.452872   11652 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 21:19:38.452872   11652 kubeadm.go:319] 
	I1212 21:19:38.452872   11652 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token mp97cm.8hjjnzaub1tyagxk \
	I1212 21:19:38.453878   11652 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:09c060cffc5de927fd44ab9c7a28aa4e8ee2015281ea8365803cef45475b4e06 \
	I1212 21:19:38.453878   11652 kubeadm.go:319] 	--control-plane 
	I1212 21:19:38.453878   11652 kubeadm.go:319] 
	I1212 21:19:38.453878   11652 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 21:19:38.453878   11652 kubeadm.go:319] 
	I1212 21:19:38.453878   11652 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token mp97cm.8hjjnzaub1tyagxk \
	I1212 21:19:38.453878   11652 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:09c060cffc5de927fd44ab9c7a28aa4e8ee2015281ea8365803cef45475b4e06 
	I1212 21:19:38.453878   11652 cni.go:84] Creating CNI manager for "false"
	I1212 21:19:38.453878   11652 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 21:19:38.458870   11652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:19:38.458870   11652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes false-864500 minikube.k8s.io/updated_at=2025_12_12T21_19_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300 minikube.k8s.io/name=false-864500 minikube.k8s.io/primary=true
	I1212 21:19:38.523214   11652 ops.go:34] apiserver oom_adj: -16
	I1212 21:19:38.781357   11652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:19:39.282652   11652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:19:39.507114   11500 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 | docker load": (3.0992898s)
	I1212 21:19:39.507114   11500 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 from cache
	I1212 21:19:39.507114   11500 docker.go:305] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1212 21:19:39.507114   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load"
	I1212 21:19:39.784029   11652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:19:40.282365   11652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:19:40.781513   11652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:19:41.282646   11652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:19:41.781434   11652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:19:42.282107   11652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:19:42.782628   11652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:19:43.282338   11652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:19:43.428588   11652 kubeadm.go:1114] duration metric: took 4.9746316s to wait for elevateKubeSystemPrivileges
	I1212 21:19:43.428588   11652 kubeadm.go:403] duration metric: took 25.2712454s to StartCluster
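The run of `kubectl get sa default` commands above, repeated roughly every 500ms until the default service account exists, is a simple poll-until-success loop. A hedged sketch of that pattern (the function name, timeout, and probed command are illustrative, not minikube's actual API):

```shell
# Retry a command every 0.5s until it succeeds or the deadline passes,
# like the repeated "kubectl get sa default" runs in the log above.
poll_until() {
    local deadline=$(( $(date +%s) + $1 ))
    shift
    until "$@"; do
        [ "$(date +%s)" -ge "$deadline" ] && return 1
        sleep 0.5
    done
}

poll_until 2 test -e /tmp && echo ok   # prints "ok"
```

In the log the loop ran ~10 times before the service account appeared, matching the reported 4.97s wait for `elevateKubeSystemPrivileges`.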
	I1212 21:19:43.428588   11652 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:19:43.429622   11652 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:19:43.430583   11652 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:19:43.431590   11652 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 21:19:43.431590   11652 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 21:19:43.431590   11652 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 21:19:43.432613   11652 addons.go:70] Setting storage-provisioner=true in profile "false-864500"
	I1212 21:19:43.432613   11652 addons.go:239] Setting addon storage-provisioner=true in "false-864500"
	I1212 21:19:43.432613   11652 host.go:66] Checking if "false-864500" exists ...
	I1212 21:19:43.432613   11652 addons.go:70] Setting default-storageclass=true in profile "false-864500"
	I1212 21:19:43.432613   11652 config.go:182] Loaded profile config "false-864500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1212 21:19:43.432613   11652 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "false-864500"
	I1212 21:19:43.435596   11652 out.go:179] * Verifying Kubernetes components...
	I1212 21:19:43.444601   11652 cli_runner.go:164] Run: docker container inspect false-864500 --format={{.State.Status}}
	I1212 21:19:43.444601   11652 cli_runner.go:164] Run: docker container inspect false-864500 --format={{.State.Status}}
	I1212 21:19:43.445598   11652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:19:43.504597   11652 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:19:43.507589   11652 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:19:43.507589   11652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:19:43.512594   11652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-864500
	I1212 21:19:43.533600   11652 addons.go:239] Setting addon default-storageclass=true in "false-864500"
	I1212 21:19:43.533600   11652 host.go:66] Checking if "false-864500" exists ...
	I1212 21:19:43.543592   11652 cli_runner.go:164] Run: docker container inspect false-864500 --format={{.State.Status}}
	I1212 21:19:43.576599   11652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62070 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\false-864500\id_rsa Username:docker}
	I1212 21:19:43.597585   11652 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:19:43.597585   11652 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:19:43.600585   11652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-864500
	I1212 21:19:43.658589   11652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62070 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\false-864500\id_rsa Username:docker}
	I1212 21:19:43.723686   11652 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 21:19:43.911429   11652 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:19:43.928428   11652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:19:43.929431   11652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:19:44.615772    3672 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 21:19:44.615772    3672 kubeadm.go:319] 
	I1212 21:19:44.615772    3672 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 21:19:44.620011    3672 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 21:19:44.620011    3672 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 21:19:44.620011    3672 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 21:19:44.620011    3672 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 21:19:44.620547    3672 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 21:19:44.620666    3672 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 21:19:44.620666    3672 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 21:19:44.620666    3672 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 21:19:44.620666    3672 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 21:19:44.620666    3672 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 21:19:44.621192    3672 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 21:19:44.621277    3672 kubeadm.go:319] CONFIG_INET: enabled
	I1212 21:19:44.621374    3672 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 21:19:44.621399    3672 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 21:19:44.621399    3672 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 21:19:44.621399    3672 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 21:19:44.621399    3672 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 21:19:44.621399    3672 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 21:19:44.621929    3672 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 21:19:44.622170    3672 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 21:19:44.622200    3672 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 21:19:44.622200    3672 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 21:19:44.622200    3672 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 21:19:44.622200    3672 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 21:19:44.622200    3672 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 21:19:44.622818    3672 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 21:19:44.622818    3672 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 21:19:44.622818    3672 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 21:19:44.622818    3672 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 21:19:44.622818    3672 kubeadm.go:319] OS: Linux
	I1212 21:19:44.623341    3672 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 21:19:44.623484    3672 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 21:19:44.623515    3672 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 21:19:44.623621    3672 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 21:19:44.623681    3672 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 21:19:44.623681    3672 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 21:19:44.623681    3672 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 21:19:44.623681    3672 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 21:19:44.623681    3672 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 21:19:44.624284    3672 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:19:44.624419    3672 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:19:44.624419    3672 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 21:19:44.624419    3672 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:19:44.701398    3672 out.go:252]   - Generating certificates and keys ...
	I1212 21:19:44.701851    3672 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 21:19:44.701851    3672 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 21:19:44.701851    3672 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 21:19:44.701851    3672 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 21:19:44.701851    3672 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 21:19:44.702496    3672 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 21:19:44.702496    3672 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 21:19:44.702496    3672 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 21:19:44.702496    3672 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 21:19:44.703029    3672 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 21:19:44.703108    3672 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 21:19:44.703163    3672 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:19:44.703163    3672 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:19:44.703163    3672 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 21:19:44.703163    3672 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:19:44.703163    3672 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:19:44.703731    3672 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:19:44.703821    3672 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:19:44.703821    3672 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:19:44.751684    3672 out.go:252]   - Booting up control plane ...
	I1212 21:19:44.751786    3672 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:19:44.751786    3672 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:19:44.752322    3672 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:19:44.752440    3672 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:19:44.752440    3672 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 21:19:44.753089    3672 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 21:19:44.753429    3672 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:19:44.753634    3672 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 21:19:44.754092    3672 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 21:19:44.754359    3672 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 21:19:44.754565    3672 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00021833s
	I1212 21:19:44.754612    3672 kubeadm.go:319] 
	I1212 21:19:44.754747    3672 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 21:19:44.754922    3672 kubeadm.go:319] 	- The kubelet is not running
	I1212 21:19:44.755082    3672 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 21:19:44.755082    3672 kubeadm.go:319] 
	I1212 21:19:44.755289    3672 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 21:19:44.755289    3672 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 21:19:44.755289    3672 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 21:19:44.755289    3672 kubeadm.go:319] 
	I1212 21:19:44.755289    3672 kubeadm.go:403] duration metric: took 12m9.4078185s to StartCluster
	I1212 21:19:44.755289    3672 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:19:44.760820    3672 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:19:44.824660    3672 cri.go:89] found id: ""
	I1212 21:19:44.824660    3672 logs.go:282] 0 containers: []
	W1212 21:19:44.824660    3672 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:19:44.824660    3672 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:19:44.829837    3672 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:19:44.878307    3672 cri.go:89] found id: ""
	I1212 21:19:44.878307    3672 logs.go:282] 0 containers: []
	W1212 21:19:44.878307    3672 logs.go:284] No container was found matching "etcd"
	I1212 21:19:44.878307    3672 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:19:44.883779    3672 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:19:44.950014    3672 cri.go:89] found id: ""
	I1212 21:19:44.950014    3672 logs.go:282] 0 containers: []
	W1212 21:19:44.950014    3672 logs.go:284] No container was found matching "coredns"
	I1212 21:19:44.950014    3672 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:19:44.955259    3672 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:19:45.002810    3672 cri.go:89] found id: ""
	I1212 21:19:45.002810    3672 logs.go:282] 0 containers: []
	W1212 21:19:45.002810    3672 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:19:45.002810    3672 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:19:45.009047    3672 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:19:45.053837    3672 cri.go:89] found id: ""
	I1212 21:19:45.053880    3672 logs.go:282] 0 containers: []
	W1212 21:19:45.053880    3672 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:19:45.053880    3672 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:19:45.058452    3672 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:19:45.107565    3672 cri.go:89] found id: ""
	I1212 21:19:45.107565    3672 logs.go:282] 0 containers: []
	W1212 21:19:45.107565    3672 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:19:45.107565    3672 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:19:45.114689    3672 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:19:45.160435    3672 cri.go:89] found id: ""
	I1212 21:19:45.160435    3672 logs.go:282] 0 containers: []
	W1212 21:19:45.160435    3672 logs.go:284] No container was found matching "kindnet"
	I1212 21:19:45.160435    3672 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:19:45.164686    3672 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:19:45.207166    3672 cri.go:89] found id: ""
	I1212 21:19:45.207166    3672 logs.go:282] 0 containers: []
	W1212 21:19:45.207166    3672 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:19:45.207166    3672 logs.go:123] Gathering logs for kubelet ...
	I1212 21:19:45.208179    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:19:45.284363    3672 logs.go:123] Gathering logs for dmesg ...
	I1212 21:19:45.284439    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:19:45.329298    3672 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:19:45.329298    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:19:45.425787    3672 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:19:45.426341    3672 logs.go:123] Gathering logs for Docker ...
	I1212 21:19:45.426389    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:19:45.457222    3672 logs.go:123] Gathering logs for container status ...
	I1212 21:19:45.457222    3672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:19:45.509039    3672 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00021833s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 21:19:45.509579    3672 out.go:285] * 
	W1212 21:19:45.509702    3672 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1212 21:19:45.509989    3672 out.go:285] * 
	W1212 21:19:45.513435    3672 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 21:19:45.565706    3672 out.go:203] 
	W1212 21:19:45.605838    3672 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1212 21:19:45.605838    3672 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 21:19:45.605838    3672 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 21:19:45.618910    3672 out.go:203] 
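The failure above is kubeadm giving up after polling the kubelet's healthz endpoint (`http://127.0.0.1:10248/healthz`) for the advertised 4m0s. Mechanically this is a poll-until-healthy loop with a deadline; a minimal sketch in Python (hypothetical helper names, not kubeadm's actual code), assuming a `probe` callable that returns True once the endpoint answers 200:

```python
import time

def wait_until_healthy(probe, timeout=240.0, interval=1.0):
    """Poll `probe` until it returns True or `timeout` seconds elapse.

    Mirrors the log above: kubeadm repeatedly checks the kubelet's
    /healthz endpoint and reports failure after 4m0s if it never
    becomes healthy (here: connection refused, kubelet not running).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(interval)
    return False

# Demo with a fake probe that becomes healthy on the third call.
calls = {"n": 0}
def fake_probe():
    calls["n"] += 1
    return calls["n"] >= 3

print(wait_until_healthy(fake_probe, timeout=5.0, interval=0.01))  # True
```

In the run above the probe never succeeds (`connection refused`), so the loop exhausts its deadline and kubeadm aborts the `wait-control-plane` phase.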
	I1212 21:19:45.278719   11652 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.367268s)
	I1212 21:19:45.278719   11652 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.5550086s)
	I1212 21:19:45.278719   11652 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
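The long sed pipeline two lines up rewrites the CoreDNS ConfigMap so that `host.minikube.internal` resolves inside the cluster: it inserts a `hosts` block before the `forward` directive and a `log` directive before `errors`. Reconstructed from the sed expressions (not captured from this run), the patched Corefile section looks roughly like:

```
        log
        errors
        hosts {
           192.168.65.254 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf ...
```

The `fallthrough` keyword lets queries that don't match the static host entry continue to the `forward` plugin.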
	I1212 21:19:45.283270   11652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" false-864500
	I1212 21:19:45.346529   11652 node_ready.go:35] waiting up to 15m0s for node "false-864500" to be "Ready" ...
	I1212 21:19:45.358522   11652 node_ready.go:49] node "false-864500" is "Ready"
	I1212 21:19:45.358522   11652 node_ready.go:38] duration metric: took 11.993ms for node "false-864500" to be "Ready" ...
	I1212 21:19:45.358522   11652 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:19:45.366728   11652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:19:45.808020   11652 kapi.go:214] "coredns" deployment in "kube-system" namespace and "false-864500" context rescaled to 1 replicas
	I1212 21:19:46.337908   11652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.4084387s)
	I1212 21:19:46.337908   11652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.409442s)
	I1212 21:19:46.337908   11652 api_server.go:72] duration metric: took 2.9062716s to wait for apiserver process to appear ...
	I1212 21:19:46.337908   11652 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:19:46.338917   11652 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62074/healthz ...
	I1212 21:19:46.369930   11652 api_server.go:279] https://127.0.0.1:62074/healthz returned 200:
	ok
	I1212 21:19:46.372912   11652 api_server.go:141] control plane version: v1.34.2
	I1212 21:19:46.372912   11652 api_server.go:131] duration metric: took 35.0033ms to wait for apiserver health ...
	I1212 21:19:46.372912   11652 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:19:46.379912   11652 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1212 21:19:42.585494   11500 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load": (3.0783311s)
	I1212 21:19:42.585494   11500 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 from cache
	I1212 21:19:42.585494   11500 docker.go:305] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1212 21:19:42.585494   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 | docker load"
	I1212 21:19:46.136910   11500 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 | docker load": (3.5513599s)
	I1212 21:19:46.136910   11500 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 from cache
	I1212 21:19:46.136910   11500 docker.go:305] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1212 21:19:46.136910   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1212 21:19:47.046821    2276 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1212 21:19:47.046821    2276 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 21:19:47.046821    2276 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:19:47.047832    2276 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:19:47.047832    2276 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 21:19:47.047832    2276 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:19:47.054809    2276 out.go:252]   - Generating certificates and keys ...
	I1212 21:19:47.054809    2276 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 21:19:47.054809    2276 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 21:19:47.054809    2276 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 21:19:47.054809    2276 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 21:19:47.055813    2276 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 21:19:47.055813    2276 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 21:19:47.055813    2276 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 21:19:47.055813    2276 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-246400] and IPs [192.168.112.2 127.0.0.1 ::1]
	I1212 21:19:47.055813    2276 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 21:19:47.056805    2276 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-246400] and IPs [192.168.112.2 127.0.0.1 ::1]
	I1212 21:19:47.056805    2276 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 21:19:47.056805    2276 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 21:19:47.056805    2276 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 21:19:47.056805    2276 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:19:47.056805    2276 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:19:47.056805    2276 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:19:47.056805    2276 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:19:47.057805    2276 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:19:47.057805    2276 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:19:47.057805    2276 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:19:47.062811    2276 out.go:252]   - Booting up control plane ...
	I1212 21:19:47.063808    2276 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:19:47.063808    2276 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:19:47.063808    2276 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:19:47.063808    2276 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:19:47.063808    2276 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:19:47.063808    2276 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 21:19:47.063808    2276 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 21:19:47.064809    2276 kubeadm.go:319] [apiclient] All control plane components are healthy after 11.003577 seconds
	I1212 21:19:47.064809    2276 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 21:19:47.064809    2276 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 21:19:47.064809    2276 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 21:19:47.065819    2276 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-246400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 21:19:47.065819    2276 kubeadm.go:319] [bootstrap-token] Using token: 4wexxj.fvcmzbgx2xm991ou
	I1212 21:19:47.071806    2276 out.go:252]   - Configuring RBAC rules ...
	I1212 21:19:47.071806    2276 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 21:19:47.071806    2276 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 21:19:47.072813    2276 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 21:19:47.072813    2276 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 21:19:47.072813    2276 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 21:19:47.072813    2276 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 21:19:47.072813    2276 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 21:19:47.073817    2276 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 21:19:47.073817    2276 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 21:19:47.073817    2276 kubeadm.go:319] 
	I1212 21:19:47.073817    2276 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 21:19:47.073817    2276 kubeadm.go:319] 
	I1212 21:19:47.073817    2276 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 21:19:47.073817    2276 kubeadm.go:319] 
	I1212 21:19:47.073817    2276 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 21:19:47.073817    2276 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 21:19:47.073817    2276 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 21:19:47.073817    2276 kubeadm.go:319] 
	I1212 21:19:47.073817    2276 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 21:19:47.073817    2276 kubeadm.go:319] 
	I1212 21:19:47.074813    2276 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 21:19:47.074813    2276 kubeadm.go:319] 
	I1212 21:19:47.074813    2276 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 21:19:47.074813    2276 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 21:19:47.074813    2276 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 21:19:47.074813    2276 kubeadm.go:319] 
	I1212 21:19:47.074813    2276 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 21:19:47.074813    2276 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 21:19:47.074813    2276 kubeadm.go:319] 
	I1212 21:19:47.074813    2276 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 4wexxj.fvcmzbgx2xm991ou \
	I1212 21:19:47.075813    2276 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:09c060cffc5de927fd44ab9c7a28aa4e8ee2015281ea8365803cef45475b4e06 \
	I1212 21:19:47.075813    2276 kubeadm.go:319] 	--control-plane 
	I1212 21:19:47.075813    2276 kubeadm.go:319] 
	I1212 21:19:47.075813    2276 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 21:19:47.075813    2276 kubeadm.go:319] 
	I1212 21:19:47.075813    2276 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4wexxj.fvcmzbgx2xm991ou \
	I1212 21:19:47.075813    2276 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:09c060cffc5de927fd44ab9c7a28aa4e8ee2015281ea8365803cef45475b4e06 
	I1212 21:19:47.075813    2276 cni.go:84] Creating CNI manager for ""
	I1212 21:19:47.075813    2276 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:19:47.077817    2276 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 21:19:46.381907   11652 system_pods.go:59] 8 kube-system pods found
	I1212 21:19:46.381907   11652 system_pods.go:61] "coredns-66bc5c9577-bm2dt" [384e171a-0932-4fe2-8fdf-36e53fd51976] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:19:46.381907   11652 system_pods.go:61] "coredns-66bc5c9577-jlqzc" [bc64cb51-4afa-4f6c-be2f-a44005026bf4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:19:46.381907   11652 system_pods.go:61] "etcd-false-864500" [6dc1f8a7-a396-42d3-a205-906f922a892e] Running
	I1212 21:19:46.381907   11652 system_pods.go:61] "kube-apiserver-false-864500" [f1cdc781-13c9-48fd-aefe-3ea4b4b9517d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:19:46.381907   11652 system_pods.go:61] "kube-controller-manager-false-864500" [c12c6f8a-f2c4-4760-a223-2a0f1117d8ff] Running
	I1212 21:19:46.381907   11652 system_pods.go:61] "kube-proxy-vtjvf" [6ff58bf5-f01e-4839-a9c6-fa6c98c03348] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:19:46.381907   11652 system_pods.go:61] "kube-scheduler-false-864500" [052b1b39-9ef7-46c9-8ede-3b0f669d57a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:19:46.381907   11652 system_pods.go:61] "storage-provisioner" [6b97764c-cf5d-4822-a923-216dea247c34] Pending
	I1212 21:19:46.381907   11652 system_pods.go:74] duration metric: took 8.9952ms to wait for pod list to return data ...
	I1212 21:19:46.381907   11652 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:19:46.384933   11652 addons.go:530] duration metric: took 2.9532963s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1212 21:19:46.401911   11652 default_sa.go:45] found service account: "default"
	I1212 21:19:46.401911   11652 default_sa.go:55] duration metric: took 20.0037ms for default service account to be created ...
	I1212 21:19:46.401911   11652 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:19:46.418908   11652 system_pods.go:86] 8 kube-system pods found
	I1212 21:19:46.418908   11652 system_pods.go:89] "coredns-66bc5c9577-bm2dt" [384e171a-0932-4fe2-8fdf-36e53fd51976] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:19:46.418908   11652 system_pods.go:89] "coredns-66bc5c9577-jlqzc" [bc64cb51-4afa-4f6c-be2f-a44005026bf4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:19:46.418908   11652 system_pods.go:89] "etcd-false-864500" [6dc1f8a7-a396-42d3-a205-906f922a892e] Running
	I1212 21:19:46.418908   11652 system_pods.go:89] "kube-apiserver-false-864500" [f1cdc781-13c9-48fd-aefe-3ea4b4b9517d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:19:46.418908   11652 system_pods.go:89] "kube-controller-manager-false-864500" [c12c6f8a-f2c4-4760-a223-2a0f1117d8ff] Running
	I1212 21:19:46.418908   11652 system_pods.go:89] "kube-proxy-vtjvf" [6ff58bf5-f01e-4839-a9c6-fa6c98c03348] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:19:46.418908   11652 system_pods.go:89] "kube-scheduler-false-864500" [052b1b39-9ef7-46c9-8ede-3b0f669d57a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:19:46.418908   11652 system_pods.go:89] "storage-provisioner" [6b97764c-cf5d-4822-a923-216dea247c34] Pending
	I1212 21:19:46.418908   11652 retry.go:31] will retry after 284.103541ms: missing components: kube-dns, kube-proxy
	I1212 21:19:46.715265   11652 system_pods.go:86] 8 kube-system pods found
	I1212 21:19:46.716282   11652 system_pods.go:89] "coredns-66bc5c9577-bm2dt" [384e171a-0932-4fe2-8fdf-36e53fd51976] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:19:46.716282   11652 system_pods.go:89] "coredns-66bc5c9577-jlqzc" [bc64cb51-4afa-4f6c-be2f-a44005026bf4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:19:46.716282   11652 system_pods.go:89] "etcd-false-864500" [6dc1f8a7-a396-42d3-a205-906f922a892e] Running
	I1212 21:19:46.716282   11652 system_pods.go:89] "kube-apiserver-false-864500" [f1cdc781-13c9-48fd-aefe-3ea4b4b9517d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:19:46.716282   11652 system_pods.go:89] "kube-controller-manager-false-864500" [c12c6f8a-f2c4-4760-a223-2a0f1117d8ff] Running
	I1212 21:19:46.716282   11652 system_pods.go:89] "kube-proxy-vtjvf" [6ff58bf5-f01e-4839-a9c6-fa6c98c03348] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:19:46.716282   11652 system_pods.go:89] "kube-scheduler-false-864500" [052b1b39-9ef7-46c9-8ede-3b0f669d57a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:19:46.716282   11652 system_pods.go:89] "storage-provisioner" [6b97764c-cf5d-4822-a923-216dea247c34] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:19:46.716282   11652 retry.go:31] will retry after 373.817159ms: missing components: kube-dns, kube-proxy
	I1212 21:19:47.098803   11652 system_pods.go:86] 8 kube-system pods found
	I1212 21:19:47.098803   11652 system_pods.go:89] "coredns-66bc5c9577-bm2dt" [384e171a-0932-4fe2-8fdf-36e53fd51976] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:19:47.098803   11652 system_pods.go:89] "coredns-66bc5c9577-jlqzc" [bc64cb51-4afa-4f6c-be2f-a44005026bf4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:19:47.098803   11652 system_pods.go:89] "etcd-false-864500" [6dc1f8a7-a396-42d3-a205-906f922a892e] Running
	I1212 21:19:47.098803   11652 system_pods.go:89] "kube-apiserver-false-864500" [f1cdc781-13c9-48fd-aefe-3ea4b4b9517d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:19:47.098803   11652 system_pods.go:89] "kube-controller-manager-false-864500" [c12c6f8a-f2c4-4760-a223-2a0f1117d8ff] Running
	I1212 21:19:47.098803   11652 system_pods.go:89] "kube-proxy-vtjvf" [6ff58bf5-f01e-4839-a9c6-fa6c98c03348] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:19:47.098803   11652 system_pods.go:89] "kube-scheduler-false-864500" [052b1b39-9ef7-46c9-8ede-3b0f669d57a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:19:47.098803   11652 system_pods.go:89] "storage-provisioner" [6b97764c-cf5d-4822-a923-216dea247c34] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:19:47.098803   11652 retry.go:31] will retry after 379.304312ms: missing components: kube-dns, kube-proxy
	I1212 21:19:47.567778   11652 system_pods.go:86] 8 kube-system pods found
	I1212 21:19:47.567778   11652 system_pods.go:89] "coredns-66bc5c9577-bm2dt" [384e171a-0932-4fe2-8fdf-36e53fd51976] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:19:47.567778   11652 system_pods.go:89] "coredns-66bc5c9577-jlqzc" [bc64cb51-4afa-4f6c-be2f-a44005026bf4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:19:47.567778   11652 system_pods.go:89] "etcd-false-864500" [6dc1f8a7-a396-42d3-a205-906f922a892e] Running
	I1212 21:19:47.567778   11652 system_pods.go:89] "kube-apiserver-false-864500" [f1cdc781-13c9-48fd-aefe-3ea4b4b9517d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:19:47.567778   11652 system_pods.go:89] "kube-controller-manager-false-864500" [c12c6f8a-f2c4-4760-a223-2a0f1117d8ff] Running
	I1212 21:19:47.567778   11652 system_pods.go:89] "kube-proxy-vtjvf" [6ff58bf5-f01e-4839-a9c6-fa6c98c03348] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:19:47.567778   11652 system_pods.go:89] "kube-scheduler-false-864500" [052b1b39-9ef7-46c9-8ede-3b0f669d57a5] Running
	I1212 21:19:47.567778   11652 system_pods.go:89] "storage-provisioner" [6b97764c-cf5d-4822-a923-216dea247c34] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:19:47.567778   11652 retry.go:31] will retry after 421.404403ms: missing components: kube-dns, kube-proxy
	I1212 21:19:48.000679   11652 system_pods.go:86] 8 kube-system pods found
	I1212 21:19:48.000679   11652 system_pods.go:89] "coredns-66bc5c9577-bm2dt" [384e171a-0932-4fe2-8fdf-36e53fd51976] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:19:48.000679   11652 system_pods.go:89] "coredns-66bc5c9577-jlqzc" [bc64cb51-4afa-4f6c-be2f-a44005026bf4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:19:48.000679   11652 system_pods.go:89] "etcd-false-864500" [6dc1f8a7-a396-42d3-a205-906f922a892e] Running
	I1212 21:19:48.000679   11652 system_pods.go:89] "kube-apiserver-false-864500" [f1cdc781-13c9-48fd-aefe-3ea4b4b9517d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:19:48.000679   11652 system_pods.go:89] "kube-controller-manager-false-864500" [c12c6f8a-f2c4-4760-a223-2a0f1117d8ff] Running
	I1212 21:19:48.000679   11652 system_pods.go:89] "kube-proxy-vtjvf" [6ff58bf5-f01e-4839-a9c6-fa6c98c03348] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:19:48.000679   11652 system_pods.go:89] "kube-scheduler-false-864500" [052b1b39-9ef7-46c9-8ede-3b0f669d57a5] Running
	I1212 21:19:48.000679   11652 system_pods.go:89] "storage-provisioner" [6b97764c-cf5d-4822-a923-216dea247c34] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:19:48.000679   11652 retry.go:31] will retry after 751.493282ms: missing components: kube-dns, kube-proxy
	I1212 21:19:48.764244   11652 system_pods.go:86] 8 kube-system pods found
	I1212 21:19:48.764272   11652 system_pods.go:89] "coredns-66bc5c9577-bm2dt" [384e171a-0932-4fe2-8fdf-36e53fd51976] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:19:48.764272   11652 system_pods.go:89] "coredns-66bc5c9577-jlqzc" [bc64cb51-4afa-4f6c-be2f-a44005026bf4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:19:48.764272   11652 system_pods.go:89] "etcd-false-864500" [6dc1f8a7-a396-42d3-a205-906f922a892e] Running
	I1212 21:19:48.764272   11652 system_pods.go:89] "kube-apiserver-false-864500" [f1cdc781-13c9-48fd-aefe-3ea4b4b9517d] Running
	I1212 21:19:48.764272   11652 system_pods.go:89] "kube-controller-manager-false-864500" [c12c6f8a-f2c4-4760-a223-2a0f1117d8ff] Running
	I1212 21:19:48.764272   11652 system_pods.go:89] "kube-proxy-vtjvf" [6ff58bf5-f01e-4839-a9c6-fa6c98c03348] Running
	I1212 21:19:48.764272   11652 system_pods.go:89] "kube-scheduler-false-864500" [052b1b39-9ef7-46c9-8ede-3b0f669d57a5] Running
	I1212 21:19:48.764272   11652 system_pods.go:89] "storage-provisioner" [6b97764c-cf5d-4822-a923-216dea247c34] Running
	I1212 21:19:48.764272   11652 system_pods.go:126] duration metric: took 2.3623233s to wait for k8s-apps to be running ...
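The `retry.go:31` lines above show minikube re-listing kube-system pods with a growing, jittered delay (284ms, 373ms, 379ms, 421ms, 751ms) until no components are reported missing. A minimal sketch of that pattern in Python (hypothetical names and constants, not minikube's implementation):

```python
import random
import time

def retry_until(check, base=0.25, factor=1.5, max_attempts=10):
    """Call `check` until it returns an empty 'missing' list, sleeping a
    jittered, exponentially growing delay between attempts, as the
    retry.go lines above do for missing kube-system components.
    Returns the number of sleeps that were needed."""
    delay = base
    missing = check()
    for attempt in range(max_attempts):
        if not missing:
            return attempt
        time.sleep(delay * random.uniform(0.8, 1.2))
        delay *= factor
        missing = check()
    raise TimeoutError(f"still missing after {max_attempts} attempts: {missing}")

# Fake check: kube-dns and kube-proxy come up after two polls.
state = {"polls": 0}
def check():
    state["polls"] += 1
    return [] if state["polls"] > 2 else ["kube-dns", "kube-proxy"]

print(retry_until(check, base=0.01))  # 2
```

Jitter keeps many concurrent waiters (this log interleaves three parallel test processes) from hammering the apiserver in lockstep.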
	I1212 21:19:48.764272   11652 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:19:48.768048   11652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:19:48.788108   11652 system_svc.go:56] duration metric: took 23.8365ms WaitForService to wait for kubelet
	I1212 21:19:48.788108   11652 kubeadm.go:587] duration metric: took 5.3564336s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:19:48.788108   11652 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:19:48.795065   11652 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1212 21:19:48.795086   11652 node_conditions.go:123] node cpu capacity is 16
	I1212 21:19:48.795154   11652 node_conditions.go:105] duration metric: took 7.0245ms to run NodePressure ...
	I1212 21:19:48.795154   11652 start.go:242] waiting for startup goroutines ...
	I1212 21:19:48.795154   11652 start.go:247] waiting for cluster config update ...
	I1212 21:19:48.795154   11652 start.go:256] writing updated cluster config ...
	I1212 21:19:48.800267   11652 ssh_runner.go:195] Run: rm -f paused
	I1212 21:19:48.808210   11652 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 21:19:48.818057   11652 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bm2dt" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:19:47.086813    2276 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 21:19:47.117818    2276 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1212 21:19:47.141822    2276 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 21:19:47.147817    2276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-246400 minikube.k8s.io/updated_at=2025_12_12T21_19_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300 minikube.k8s.io/name=old-k8s-version-246400 minikube.k8s.io/primary=true
	I1212 21:19:47.147817    2276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:19:47.214602    2276 ops.go:34] apiserver oom_adj: -16
	I1212 21:19:47.331267    2276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:19:47.830013    2276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:19:48.332203    2276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:19:48.833610    2276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:19:49.330999    2276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:19:49.831411    2276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:19:47.139819   11500 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load": (1.002894s)
	I1212 21:19:47.139819   11500 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 from cache
	I1212 21:19:47.139819   11500 docker.go:305] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1212 21:19:47.139819   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 | docker load"
	I1212 21:19:48.789102   11500 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 | docker load": (1.6492564s)
	I1212 21:19:48.789102   11500 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 from cache
	I1212 21:19:48.789102   11500 docker.go:305] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1212 21:19:48.789102   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.13.1 | docker load"
	I1212 21:19:50.257704   11500 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.13.1 | docker load": (1.4685787s)
	I1212 21:19:50.257704   11500 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 from cache
	I1212 21:19:50.257704   11500 cache_images.go:125] Successfully loaded all cached images
	I1212 21:19:50.257704   11500 cache_images.go:94] duration metric: took 20.8489565s to LoadCachedImages
	I1212 21:19:50.257704   11500 kubeadm.go:935] updating node { 192.168.121.2 8443 v1.35.0-beta.0 docker true true} ...
	I1212 21:19:50.257704   11500 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-285600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.121.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-285600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:19:50.261603   11500 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 21:19:50.344352   11500 cni.go:84] Creating CNI manager for ""
	I1212 21:19:50.344408   11500 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:19:50.344408   11500 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 21:19:50.344408   11500 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.121.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-285600 NodeName:no-preload-285600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.121.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.121.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:19:50.344408   11500 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.121.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-285600"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.121.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.121.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:19:50.348361   11500 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 21:19:50.364988   11500 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	I1212 21:19:50.369563   11500 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 21:19:50.385605   11500 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubelet
	I1212 21:19:50.385605   11500 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubeadm
	I1212 21:19:50.385605   11500 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubectl
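	(The `?checksum=file:...sha256` suffix on the URLs above tells minikube's downloader to fetch the published SHA-256 digest alongside each binary and refuse to cache a file that does not match. A minimal shell sketch of that verification step, using a local placeholder file instead of a real download — the `kubelet` file name here is illustrative only:)

```shell
# Sketch: verify a fetched file against its published .sha256 digest,
# as minikube's download step does. The "download" is simulated with a
# local file; with real artifacts you would curl the binary and its
# .sha256 first.
set -eu
workdir=$(mktemp -d)
printf 'fake kubelet binary\n' > "$workdir/kubelet"
# Publisher side: the .sha256 file carries the bare hex digest.
sha256sum "$workdir/kubelet" | awk '{print $1}' > "$workdir/kubelet.sha256"
# Consumer side: recompute locally and compare before trusting the file.
expected=$(cat "$workdir/kubelet.sha256")
actual=$(sha256sum "$workdir/kubelet" | awk '{print $1}')
[ "$expected" = "$actual" ] && echo "checksum OK"
```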
	I1212 21:19:51.493414   11500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1212 21:19:51.504639   11500 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1212 21:19:51.504639   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1212 21:19:51.558972   11500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:19:51.635221   11500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1212 21:19:51.681233   11500 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1212 21:19:51.681233   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1212 21:19:51.756818   11500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	W1212 21:19:50.833014   11652 pod_ready.go:104] pod "coredns-66bc5c9577-bm2dt" is not "Ready", error: <nil>
	W1212 21:19:52.833513   11652 pod_ready.go:104] pod "coredns-66bc5c9577-bm2dt" is not "Ready", error: <nil>
	I1212 21:19:50.330517    2276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:19:50.835429    2276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:19:51.329746    2276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:19:51.830817    2276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:19:52.331481    2276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:19:52.831528    2276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:19:53.331872    2276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:19:53.830769    2276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:19:54.331281    2276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:19:54.833458    2276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:19:51.834819   11500 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1212 21:19:51.834819   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1212 21:19:53.477870   11500 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:19:53.490869   11500 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1212 21:19:53.509873   11500 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 21:19:53.530334   11500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1212 21:19:53.556109   11500 ssh_runner.go:195] Run: grep 192.168.121.2	control-plane.minikube.internal$ /etc/hosts
	I1212 21:19:53.563308   11500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.121.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
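	(The one-liner above is minikube's idempotent hosts-file update: filter out any existing `control-plane.minikube.internal` entry, append the current mapping, and copy the result back over `/etc/hosts`. The same pattern against a scratch file — paths here are illustrative; the real target is `/etc/hosts` via sudo:)

```shell
# Sketch: idempotently (re)write a tab-separated hosts entry, mirroring
# the grep-v/append/copy idiom in the log. Uses a temp file, not /etc/hosts.
set -eu
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.121.9\tcontrol-plane.minikube.internal\n' > "$hosts"
ip=192.168.121.2
tab=$(printf '\t')
# Drop any stale entry for the name, then append the fresh mapping.
{ grep -v "${tab}control-plane.minikube.internal$" "$hosts"; \
  printf '%s\tcontrol-plane.minikube.internal\n' "$ip"; } > "$hosts.new"
mv "$hosts.new" "$hosts"
```

Re-running the block leaves exactly one entry for the name, which is why the log can apply it unconditionally on every start.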
	I1212 21:19:53.582907   11500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:19:53.731002   11500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:19:53.755449   11500 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600 for IP: 192.168.121.2
	I1212 21:19:53.755449   11500 certs.go:195] generating shared ca certs ...
	I1212 21:19:53.755449   11500 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:19:53.756045   11500 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1212 21:19:53.756397   11500 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1212 21:19:53.756489   11500 certs.go:257] generating profile certs ...
	I1212 21:19:53.756900   11500 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\client.key
	I1212 21:19:53.756998   11500 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\client.crt with IP's: []
	I1212 21:19:53.870262   11500 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\client.crt ...
	I1212 21:19:53.871257   11500 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\client.crt: {Name:mkea463969f96c4d6685797c0f8ce6eb953748e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:19:53.872077   11500 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\client.key ...
	I1212 21:19:53.872077   11500 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\client.key: {Name:mka05d4dd201d247c11decfb29bbc83837f58b15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:19:53.873212   11500 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.key.a3b2baf6
	I1212 21:19:53.873212   11500 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.crt.a3b2baf6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.121.2]
	I1212 21:19:53.914497   11500 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.crt.a3b2baf6 ...
	I1212 21:19:53.914497   11500 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.crt.a3b2baf6: {Name:mke5db4d56520ca68011f83a06ddb400a6969701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:19:53.915547   11500 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.key.a3b2baf6 ...
	I1212 21:19:53.915547   11500 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.key.a3b2baf6: {Name:mk3ebc8c919f646cbe4ff90f62c381d3f2e2546e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:19:53.917407   11500 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.crt.a3b2baf6 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.crt
	I1212 21:19:53.933194   11500 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.key.a3b2baf6 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.key
	I1212 21:19:53.933780   11500 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\proxy-client.key
	I1212 21:19:53.933780   11500 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\proxy-client.crt with IP's: []
	I1212 21:19:53.983775   11500 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\proxy-client.crt ...
	I1212 21:19:53.983775   11500 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\proxy-client.crt: {Name:mk9e2611f6249b5253a898d387ce0751b5cc75b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:19:53.984780   11500 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\proxy-client.key ...
	I1212 21:19:53.984780   11500 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\proxy-client.key: {Name:mk5f65e5d4c3a5a658327bd443d14f8a81b45c60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:19:53.999837   11500 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem (1338 bytes)
	W1212 21:19:54.000947   11500 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396_empty.pem, impossibly tiny 0 bytes
	I1212 21:19:54.000947   11500 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1212 21:19:54.000947   11500 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 21:19:54.001537   11500 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 21:19:54.001537   11500 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1212 21:19:54.002163   11500 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem (1708 bytes)
	I1212 21:19:54.002928   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:19:54.035794   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 21:19:54.062576   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:19:54.093287   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:19:54.120671   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 21:19:54.150291   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 21:19:54.177911   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:19:54.212125   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:19:54.240916   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem --> /usr/share/ca-certificates/13396.pem (1338 bytes)
	I1212 21:19:54.271773   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /usr/share/ca-certificates/133962.pem (1708 bytes)
	I1212 21:19:54.303492   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:19:54.335863   11500 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:19:54.366123   11500 ssh_runner.go:195] Run: openssl version
	I1212 21:19:54.383927   11500 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/133962.pem
	I1212 21:19:54.404179   11500 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/133962.pem /etc/ssl/certs/133962.pem
	I1212 21:19:54.423064   11500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133962.pem
	I1212 21:19:54.433274   11500 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 21:19:54.437454   11500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133962.pem
	I1212 21:19:54.494300   11500 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:19:54.512310   11500 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/133962.pem /etc/ssl/certs/3ec20f2e.0
	I1212 21:19:54.529266   11500 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:19:54.548180   11500 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:19:54.566859   11500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:19:54.577921   11500 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:19:54.582721   11500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:19:54.639577   11500 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:19:54.658152   11500 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 21:19:54.680113   11500 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13396.pem
	I1212 21:19:54.698677   11500 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13396.pem /etc/ssl/certs/13396.pem
	I1212 21:19:54.718546   11500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13396.pem
	I1212 21:19:54.727826   11500 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 21:19:54.733349   11500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13396.pem
	I1212 21:19:54.781025   11500 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:19:54.798882   11500 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/13396.pem /etc/ssl/certs/51391683.0
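	(The `openssl x509 -hash` / `ln -fs` pairs above follow OpenSSL's CA lookup convention: a certificate in `/etc/ssl/certs` is found via a symlink named `<subject-hash>.0` — e.g. `b5213941.0` for minikubeCA. A sketch of the symlink step alone, in a scratch directory with a placeholder hash; in practice the hash comes from `openssl x509 -hash -noout -in <cert>` as in the log:)

```shell
# Sketch: install a CA cert under OpenSSL's <subject-hash>.0 symlink name.
# The hash value below is a placeholder; normally it is computed with
# `openssl x509 -hash -noout -in minikubeCA.pem`.
set -eu
certdir=$(mktemp -d)
printf 'PEM PLACEHOLDER\n' > "$certdir/minikubeCA.pem"
hash=b5213941
# -f replaces any stale link, -s makes it symbolic, mirroring `ln -fs` above.
ln -fs "$certdir/minikubeCA.pem" "$certdir/$hash.0"
readlink "$certdir/$hash.0"
```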
	I1212 21:19:54.818946   11500 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:19:54.830191   11500 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 21:19:54.830378   11500 kubeadm.go:401] StartCluster: {Name:no-preload-285600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-285600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:19:54.835162   11500 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 21:19:54.871664   11500 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:19:54.888804   11500 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:19:54.904866   11500 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 21:19:54.909199   11500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:19:54.923859   11500 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:19:54.923859   11500 kubeadm.go:158] found existing configuration files:
	
	I1212 21:19:54.927851   11500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 21:19:54.940842   11500 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 21:19:54.944845   11500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 21:19:54.962369   11500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 21:19:54.976883   11500 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 21:19:54.981084   11500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 21:19:54.997668   11500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 21:19:55.011030   11500 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 21:19:55.015161   11500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 21:19:55.032693   11500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 21:19:55.045702   11500 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 21:19:55.049694   11500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 21:19:55.065693   11500 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 21:19:55.178939   11500 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1212 21:19:55.265961   11500 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 21:19:55.374410   11500 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	
	
	==> Docker <==
	Dec 12 21:07:25 kubernetes-upgrade-716700 systemd[1]: Starting docker.service - Docker Application Container Engine...
	Dec 12 21:07:25 kubernetes-upgrade-716700 dockerd[1437]: time="2025-12-12T21:07:25.443720841Z" level=info msg="Starting up"
	Dec 12 21:07:25 kubernetes-upgrade-716700 dockerd[1437]: time="2025-12-12T21:07:25.465723705Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	Dec 12 21:07:25 kubernetes-upgrade-716700 dockerd[1437]: time="2025-12-12T21:07:25.465932523Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	Dec 12 21:07:25 kubernetes-upgrade-716700 dockerd[1437]: time="2025-12-12T21:07:25.465947624Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	Dec 12 21:07:25 kubernetes-upgrade-716700 dockerd[1437]: time="2025-12-12T21:07:25.481700959Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	Dec 12 21:07:25 kubernetes-upgrade-716700 dockerd[1437]: time="2025-12-12T21:07:25.604816892Z" level=info msg="Loading containers: start."
	Dec 12 21:07:25 kubernetes-upgrade-716700 dockerd[1437]: time="2025-12-12T21:07:25.611998901Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 12 21:07:33 kubernetes-upgrade-716700 dockerd[1437]: time="2025-12-12T21:07:33.043165057Z" level=info msg="Restoring containers: start."
	Dec 12 21:07:33 kubernetes-upgrade-716700 dockerd[1437]: time="2025-12-12T21:07:33.196232008Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Dec 12 21:07:33 kubernetes-upgrade-716700 dockerd[1437]: time="2025-12-12T21:07:33.246705246Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Dec 12 21:07:33 kubernetes-upgrade-716700 dockerd[1437]: time="2025-12-12T21:07:33.633438216Z" level=info msg="Loading containers: done."
	Dec 12 21:07:33 kubernetes-upgrade-716700 dockerd[1437]: time="2025-12-12T21:07:33.667311660Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 12 21:07:33 kubernetes-upgrade-716700 dockerd[1437]: time="2025-12-12T21:07:33.667414168Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 12 21:07:33 kubernetes-upgrade-716700 dockerd[1437]: time="2025-12-12T21:07:33.667429470Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 12 21:07:33 kubernetes-upgrade-716700 dockerd[1437]: time="2025-12-12T21:07:33.667437270Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 21:07:33 kubernetes-upgrade-716700 dockerd[1437]: time="2025-12-12T21:07:33.667446371Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 12 21:07:33 kubernetes-upgrade-716700 dockerd[1437]: time="2025-12-12T21:07:33.667469173Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 12 21:07:33 kubernetes-upgrade-716700 dockerd[1437]: time="2025-12-12T21:07:33.667538179Z" level=info msg="Initializing buildkit"
	Dec 12 21:07:33 kubernetes-upgrade-716700 dockerd[1437]: time="2025-12-12T21:07:33.800136912Z" level=info msg="Completed buildkit initialization"
	Dec 12 21:07:33 kubernetes-upgrade-716700 dockerd[1437]: time="2025-12-12T21:07:33.806032207Z" level=info msg="Daemon has completed initialization"
	Dec 12 21:07:33 kubernetes-upgrade-716700 dockerd[1437]: time="2025-12-12T21:07:33.806247225Z" level=info msg="API listen on /run/docker.sock"
	Dec 12 21:07:33 kubernetes-upgrade-716700 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 12 21:07:33 kubernetes-upgrade-716700 dockerd[1437]: time="2025-12-12T21:07:33.806250825Z" level=info msg="API listen on [::]:2376"
	Dec 12 21:07:33 kubernetes-upgrade-716700 dockerd[1437]: time="2025-12-12T21:07:33.806291728Z" level=info msg="API listen on /var/run/docker.sock"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.366102] tmpfs: Unknown parameter 'noswap'
	[  +1.154764] CPU: 10 PID: 397405 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000005] RIP: 0033:0x7f071aac0b20
	[  +0.000009] Code: Unable to access opcode bytes at RIP 0x7f071aac0af6.
	[  +0.000001] RSP: 002b:00007ffc97b7f0e0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000002] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000002] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.971886] CPU: 15 PID: 397952 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f853faf5b20
	[  +0.000009] Code: Unable to access opcode bytes at RIP 0x7f853faf5af6.
	[  +0.000001] RSP: 002b:00007ffe5bc61300 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[ +10.986277] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 21:20:00 up  2:21,  0 user,  load average: 7.33, 6.22, 4.66
	Linux kubernetes-upgrade-716700 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 21:19:57 kubernetes-upgrade-716700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:19:57 kubernetes-upgrade-716700 kubelet[26030]: E1212 21:19:57.681696   26030 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:19:57 kubernetes-upgrade-716700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:19:57 kubernetes-upgrade-716700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:19:58 kubernetes-upgrade-716700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 337.
	Dec 12 21:19:58 kubernetes-upgrade-716700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:19:58 kubernetes-upgrade-716700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:19:58 kubernetes-upgrade-716700 kubelet[26057]: E1212 21:19:58.415283   26057 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:19:58 kubernetes-upgrade-716700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:19:58 kubernetes-upgrade-716700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:19:59 kubernetes-upgrade-716700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 338.
	Dec 12 21:19:59 kubernetes-upgrade-716700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:19:59 kubernetes-upgrade-716700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:19:59 kubernetes-upgrade-716700 kubelet[26076]: E1212 21:19:59.202111   26076 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:19:59 kubernetes-upgrade-716700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:19:59 kubernetes-upgrade-716700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:19:59 kubernetes-upgrade-716700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 339.
	Dec 12 21:19:59 kubernetes-upgrade-716700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:19:59 kubernetes-upgrade-716700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:19:59 kubernetes-upgrade-716700 kubelet[26185]: E1212 21:19:59.922921   26185 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:19:59 kubernetes-upgrade-716700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:19:59 kubernetes-upgrade-716700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:20:00 kubernetes-upgrade-716700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 340.
	Dec 12 21:20:00 kubernetes-upgrade-716700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:20:00 kubernetes-upgrade-716700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p kubernetes-upgrade-716700 -n kubernetes-upgrade-716700
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p kubernetes-upgrade-716700 -n kubernetes-upgrade-716700: exit status 2 (711.1924ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-716700" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-716700" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-716700
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-716700: (3.0339413s)
--- FAIL: TestKubernetesUpgrade (844.60s)

TestStartStop/group/no-preload/serial/FirstStart (531.89s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-285600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0
E1212 21:19:22.095740   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:19:27.560453   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:19:37.915297   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p no-preload-285600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m48.6546703s)

-- stdout --
	* [no-preload-285600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "no-preload-285600" primary control-plane node in "no-preload-285600" cluster
	* Pulling base image v0.0.48-1765505794-22112 ...
	
	

-- /stdout --
** stderr ** 
	I1212 21:19:11.785260   11500 out.go:360] Setting OutFile to fd 1476 ...
	I1212 21:19:11.829252   11500 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:19:11.829252   11500 out.go:374] Setting ErrFile to fd 1332...
	I1212 21:19:11.829252   11500 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:19:11.844255   11500 out.go:368] Setting JSON to false
	I1212 21:19:11.847260   11500 start.go:133] hostinfo: {"hostname":"minikube4","uptime":8489,"bootTime":1765565862,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 21:19:11.847260   11500 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 21:19:11.852259   11500 out.go:179] * [no-preload-285600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 21:19:11.857363   11500 notify.go:221] Checking for updates...
	I1212 21:19:11.859315   11500 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:19:11.861298   11500 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:19:11.865443   11500 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 21:19:11.868158   11500 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 21:19:11.875325   11500 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:19:11.878314   11500 config.go:182] Loaded profile config "false-864500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1212 21:19:11.879318   11500 config.go:182] Loaded profile config "kubernetes-upgrade-716700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:19:11.879318   11500 config.go:182] Loaded profile config "old-k8s-version-246400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I1212 21:19:11.879318   11500 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 21:19:12.008386   11500 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 21:19:12.011393   11500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:19:12.299490   11500 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-12 21:19:12.280377687 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:19:12.304502   11500 out.go:179] * Using the docker driver based on user configuration
	I1212 21:19:12.307505   11500 start.go:309] selected driver: docker
	I1212 21:19:12.308491   11500 start.go:927] validating driver "docker" against <nil>
	I1212 21:19:12.308491   11500 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:19:12.359790   11500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:19:12.647149   11500 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-12 21:19:12.620326662 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:19:12.647149   11500 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 21:19:12.648155   11500 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:19:12.650152   11500 out.go:179] * Using Docker Desktop driver with root privileges
	I1212 21:19:12.654146   11500 cni.go:84] Creating CNI manager for ""
	I1212 21:19:12.654146   11500 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:19:12.654146   11500 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 21:19:12.654146   11500 start.go:353] cluster config:
	{Name:no-preload-285600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-285600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAut
hSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:19:12.658154   11500 out.go:179] * Starting "no-preload-285600" primary control-plane node in "no-preload-285600" cluster
	I1212 21:19:12.661149   11500 cache.go:134] Beginning downloading kic base image for docker with docker
	I1212 21:19:12.665145   11500 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:19:12.668154   11500 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:19:12.668154   11500 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 21:19:12.669156   11500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1212 21:19:12.669156   11500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1212 21:19:12.669156   11500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1212 21:19:12.669156   11500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1212 21:19:12.669156   11500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1212 21:19:12.669156   11500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1212 21:19:12.669156   11500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1212 21:19:12.669156   11500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1212 21:19:12.669156   11500 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\config.json ...
	I1212 21:19:12.669156   11500 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\config.json: {Name:mka3f24491318cc00f75a0705eb5398b2088bad7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:19:13.078698   11500 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:19:13.078698   11500 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:19:13.078698   11500 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:19:13.078698   11500 start.go:360] acquireMachinesLock for no-preload-285600: {Name:mk2731f875a3a62f76017c58cc7d43a1bb1f8ba5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:19:13.078698   11500 start.go:364] duration metric: took 0s to acquireMachinesLock for "no-preload-285600"
	I1212 21:19:13.078698   11500 start.go:93] Provisioning new machine with config: &{Name:no-preload-285600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-285600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 21:19:13.078698   11500 start.go:125] createHost starting for "" (driver="docker")
	I1212 21:19:13.082699   11500 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 21:19:13.082699   11500 start.go:159] libmachine.API.Create for "no-preload-285600" (driver="docker")
	I1212 21:19:13.082699   11500 client.go:173] LocalClient.Create starting
	I1212 21:19:13.083687   11500 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1212 21:19:13.083687   11500 main.go:143] libmachine: Decoding PEM data...
	I1212 21:19:13.083687   11500 main.go:143] libmachine: Parsing certificate...
	I1212 21:19:13.083687   11500 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1212 21:19:13.083687   11500 main.go:143] libmachine: Decoding PEM data...
	I1212 21:19:13.083687   11500 main.go:143] libmachine: Parsing certificate...
	I1212 21:19:13.090686   11500 cli_runner.go:164] Run: docker network inspect no-preload-285600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 21:19:13.190881   11500 cli_runner.go:211] docker network inspect no-preload-285600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 21:19:13.196895   11500 network_create.go:284] running [docker network inspect no-preload-285600] to gather additional debugging logs...
	I1212 21:19:13.196895   11500 cli_runner.go:164] Run: docker network inspect no-preload-285600
	W1212 21:19:15.028733   11500 cli_runner.go:211] docker network inspect no-preload-285600 returned with exit code 1
	I1212 21:19:15.028733   11500 cli_runner.go:217] Completed: docker network inspect no-preload-285600: (1.8318087s)
	I1212 21:19:15.029266   11500 network_create.go:287] error running [docker network inspect no-preload-285600]: docker network inspect no-preload-285600: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-285600 not found
	I1212 21:19:15.029320   11500 network_create.go:289] output of [docker network inspect no-preload-285600]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-285600 not found
	
	** /stderr **
	I1212 21:19:15.035947   11500 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 21:19:15.193358   11500 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 21:19:15.224489   11500 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 21:19:15.271604   11500 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 21:19:15.302308   11500 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 21:19:15.349210   11500 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 21:19:15.401607   11500 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 21:19:15.440870   11500 network.go:209] skipping subnet 192.168.103.0/24 that is reserved: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 21:19:15.472180   11500 network.go:209] skipping subnet 192.168.112.0/24 that is reserved: &{IP:192.168.112.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.112.0/24 Gateway:192.168.112.1 ClientMin:192.168.112.2 ClientMax:192.168.112.254 Broadcast:192.168.112.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 21:19:15.516010   11500 network.go:206] using free private subnet 192.168.121.0/24: &{IP:192.168.121.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.121.0/24 Gateway:192.168.121.1 ClientMin:192.168.121.2 ClientMax:192.168.121.254 Broadcast:192.168.121.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b9cdb0}
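	[editor's note] The subnet scan logged above walks /24 candidates with the third octet stepping by 9 (192.168.49.0/24, .58, .67, ..., .112 all reserved; 192.168.121.0/24 free). A minimal sketch of that candidate walk, assuming a caller-supplied set of reserved networks; the helper name `first_free_subnet` is hypothetical, not minikube's actual API:

```python
import ipaddress

def first_free_subnet(reserved,
                      start=ipaddress.ip_network("192.168.49.0/24"),
                      step=9, limit=20):
    """Return the first /24 candidate not in `reserved`, walking the
    sequence seen in the log: 192.168.49.0/24, 192.168.58.0/24, ...
    (third octet incremented by `step` each attempt)."""
    net = start
    for _ in range(limit):
        if net not in reserved:
            return net
        # Bump the third octet by `step`: .49 -> .58 -> .67 -> ...
        base = int(net.network_address) + step * 256
        net = ipaddress.ip_network((base, net.prefixlen))
    raise RuntimeError("no free private subnet found")

# Subnets the log reports as already reserved by other docker networks:
reserved = {ipaddress.ip_network(f"192.168.{o}.0/24")
            for o in (49, 58, 67, 76, 85, 94, 103, 112)}
print(first_free_subnet(reserved))  # -> 192.168.121.0/24, as in the log
```

With all eight logged subnets reserved, the walk lands on 192.168.121.0/24, matching the `using free private subnet` line above; the gateway (.1) and the container's static IP (.2, logged a few lines below) are then taken from that range.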
	I1212 21:19:15.516585   11500 network_create.go:124] attempt to create docker network no-preload-285600 192.168.121.0/24 with gateway 192.168.121.1 and MTU of 1500 ...
	I1212 21:19:15.521595   11500 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.121.0/24 --gateway=192.168.121.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-285600 no-preload-285600
	I1212 21:19:15.789772   11500 network_create.go:108] docker network no-preload-285600 192.168.121.0/24 created
	I1212 21:19:15.789772   11500 kic.go:121] calculated static IP "192.168.121.2" for the "no-preload-285600" container
	I1212 21:19:15.809008   11500 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 21:19:15.897795   11500 cli_runner.go:164] Run: docker volume create no-preload-285600 --label name.minikube.sigs.k8s.io=no-preload-285600 --label created_by.minikube.sigs.k8s.io=true
	I1212 21:19:15.990942   11500 oci.go:103] Successfully created a docker volume no-preload-285600
	I1212 21:19:15.996535   11500 cli_runner.go:164] Run: docker run --rm --name no-preload-285600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-285600 --entrypoint /usr/bin/test -v no-preload-285600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib
	I1212 21:19:16.240677   11500 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:19:16.240677   11500 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 21:19:16.248671   11500 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:19:16.249673   11500 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 21:19:16.254674   11500 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 21:19:16.260668   11500 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 21:19:16.264673   11500 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:19:16.264673   11500 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1212 21:19:16.283861   11500 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1212 21:19:16.313574   11500 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:19:16.313574   11500 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1212 21:19:16.314607   11500 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 3.6453931s
	I1212 21:19:16.314607   11500 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1212 21:19:16.330885   11500 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:19:16.331701   11500 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	W1212 21:19:16.339638   11500 image.go:191] authn lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1212 21:19:16.348041   11500 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 21:19:16.381931   11500 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:19:16.381988   11500 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1212 21:19:16.381988   11500 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 3.7127735s
	I1212 21:19:16.381988   11500 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1212 21:19:16.390465   11500 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:19:16.390857   11500 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 21:19:16.401875   11500 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	W1212 21:19:16.415849   11500 image.go:191] authn lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1212 21:19:16.449852   11500 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:19:16.450860   11500 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1212 21:19:16.450860   11500 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.7816444s
	I1212 21:19:16.450860   11500 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	W1212 21:19:16.481851   11500 image.go:191] authn lookup for registry.k8s.io/coredns/coredns:v1.13.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1212 21:19:16.548106   11500 image.go:191] authn lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1212 21:19:16.615682   11500 image.go:191] authn lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1212 21:19:16.761862   11500 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1212 21:19:16.775859   11500 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1212 21:19:16.797889   11500 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1212 21:19:16.824668   11500 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1212 21:19:16.858721   11500 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1212 21:19:17.584332   11500 cli_runner.go:217] Completed: docker run --rm --name no-preload-285600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-285600 --entrypoint /usr/bin/test -v no-preload-285600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib: (1.5877722s)
	I1212 21:19:17.584332   11500 oci.go:107] Successfully prepared a docker volume no-preload-285600
	I1212 21:19:17.584332   11500 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 21:19:17.588335   11500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:19:17.704335   11500 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1212 21:19:17.704335   11500 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 5.0350991s
	I1212 21:19:17.704335   11500 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1212 21:19:17.833342   11500 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-12 21:19:17.815740068 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:19:17.836342   11500 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 21:19:18.101913   11500 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-285600 --name no-preload-285600 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-285600 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-285600 --network no-preload-285600 --ip 192.168.121.2 --volume no-preload-285600:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138
	I1212 21:19:18.418174   11500 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1212 21:19:18.418174   11500 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 5.7489275s
	I1212 21:19:18.418174   11500 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1212 21:19:18.437458   11500 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1212 21:19:18.437458   11500 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 5.7682114s
	I1212 21:19:18.437458   11500 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1212 21:19:18.645105   11500 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1212 21:19:18.645105   11500 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 5.9758542s
	I1212 21:19:18.645105   11500 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1212 21:19:18.699585   11500 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1212 21:19:18.699755   11500 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 6.0305038s
	I1212 21:19:18.699755   11500 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1212 21:19:18.699755   11500 cache.go:87] Successfully saved all images to host disk.
	I1212 21:19:18.814270   11500 cli_runner.go:164] Run: docker container inspect no-preload-285600 --format={{.State.Running}}
	I1212 21:19:18.878263   11500 cli_runner.go:164] Run: docker container inspect no-preload-285600 --format={{.State.Status}}
	I1212 21:19:18.942266   11500 cli_runner.go:164] Run: docker exec no-preload-285600 stat /var/lib/dpkg/alternatives/iptables
	I1212 21:19:19.064845   11500 oci.go:144] the created container "no-preload-285600" has a running status.
	I1212 21:19:19.064845   11500 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa...
	I1212 21:19:19.101842   11500 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 21:19:19.181163   11500 cli_runner.go:164] Run: docker container inspect no-preload-285600 --format={{.State.Status}}
	I1212 21:19:19.241174   11500 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 21:19:19.241174   11500 kic_runner.go:114] Args: [docker exec --privileged no-preload-285600 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 21:19:19.361414   11500 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa...
	I1212 21:19:21.556540   11500 cli_runner.go:164] Run: docker container inspect no-preload-285600 --format={{.State.Status}}
	I1212 21:19:21.609928   11500 machine.go:94] provisionDockerMachine start ...
	I1212 21:19:21.612924   11500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:19:21.672687   11500 main.go:143] libmachine: Using SSH client type: native
	I1212 21:19:21.686901   11500 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62132 <nil> <nil>}
	I1212 21:19:21.686901   11500 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:19:21.861446   11500 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-285600
	
	I1212 21:19:21.861446   11500 ubuntu.go:182] provisioning hostname "no-preload-285600"
	I1212 21:19:21.864807   11500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:19:21.925344   11500 main.go:143] libmachine: Using SSH client type: native
	I1212 21:19:21.925344   11500 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62132 <nil> <nil>}
	I1212 21:19:21.925344   11500 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-285600 && echo "no-preload-285600" | sudo tee /etc/hostname
	I1212 21:19:22.111736   11500 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-285600
	
	I1212 21:19:22.114749   11500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:19:22.166737   11500 main.go:143] libmachine: Using SSH client type: native
	I1212 21:19:22.167742   11500 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62132 <nil> <nil>}
	I1212 21:19:22.167742   11500 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-285600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-285600/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-285600' | sudo tee -a /etc/hosts; 
				fi
			fi
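The /etc/hosts script above is an idempotent update: it only acts when no line already ends with the hostname, rewriting an existing `127.0.1.1` entry in place or appending a fresh one. A minimal sketch of the same pattern, run against a temporary file instead of /etc/hosts (the file contents and the reuse of this run's hostname are placeholders for illustration):

```shell
# Stand-in for /etc/hosts; "old-name" is a hypothetical stale entry.
HOSTS=$(mktemp)
NAME=no-preload-285600
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

# Only touch the file if no line already ends with the hostname.
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    # An existing 127.0.1.1 line is rewritten in place...
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    # ...otherwise a fresh entry is appended.
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```

Running the block a second time changes nothing, which is what makes it safe for repeated provisioning.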
	I1212 21:19:22.351585   11500 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:19:22.351585   11500 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1212 21:19:22.351635   11500 ubuntu.go:190] setting up certificates
	I1212 21:19:22.351690   11500 provision.go:84] configureAuth start
	I1212 21:19:22.354709   11500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-285600
	I1212 21:19:22.410676   11500 provision.go:143] copyHostCerts
	I1212 21:19:22.410676   11500 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1212 21:19:22.411683   11500 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1212 21:19:22.411683   11500 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 21:19:22.412667   11500 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1212 21:19:22.412667   11500 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1212 21:19:22.412667   11500 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 21:19:22.413685   11500 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1212 21:19:22.413685   11500 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1212 21:19:22.413685   11500 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1212 21:19:22.414669   11500 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.no-preload-285600 san=[127.0.0.1 192.168.121.2 localhost minikube no-preload-285600]
	I1212 21:19:22.570511   11500 provision.go:177] copyRemoteCerts
	I1212 21:19:22.575186   11500 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:19:22.578325   11500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:19:22.636170   11500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62132 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	I1212 21:19:22.774439   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 21:19:22.810478   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 21:19:22.841287   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 21:19:22.872627   11500 provision.go:87] duration metric: took 520.9288ms to configureAuth
	I1212 21:19:22.872627   11500 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:19:22.872627   11500 config.go:182] Loaded profile config "no-preload-285600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:19:22.875628   11500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:19:22.929627   11500 main.go:143] libmachine: Using SSH client type: native
	I1212 21:19:22.929627   11500 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62132 <nil> <nil>}
	I1212 21:19:22.929627   11500 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 21:19:23.103684   11500 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1212 21:19:23.103684   11500 ubuntu.go:71] root file system type: overlay
	I1212 21:19:23.104289   11500 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 21:19:23.109961   11500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:19:23.165648   11500 main.go:143] libmachine: Using SSH client type: native
	I1212 21:19:23.166666   11500 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62132 <nil> <nil>}
	I1212 21:19:23.166666   11500 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 21:19:23.350814   11500 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 21:19:23.355834   11500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:19:23.422178   11500 main.go:143] libmachine: Using SSH client type: native
	I1212 21:19:23.422178   11500 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62132 <nil> <nil>}
	I1212 21:19:23.422178   11500 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
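The command above uses a "replace only if changed" idiom: `diff` exits 0 when the files match, so the `|| { ... }` branch (move the new unit into place, reload, restart) runs only on a difference. A minimal sketch under the same logic, with temp files standing in for `docker.service` and `docker.service.new` and the daemon-reload/restart steps omitted since no real service is involved:

```shell
# Hypothetical current and candidate unit files.
CUR=$(mktemp)
NEW=$(mktemp)
echo "ExecStart=/usr/bin/dockerd" > "$CUR"
echo "ExecStart=/usr/bin/dockerd --tlsverify" > "$NEW"

# diff exits non-zero on a difference, triggering the replacement branch.
diff -u "$CUR" "$NEW" || { mv "$NEW" "$CUR"; }
cat "$CUR"
```

When the files are identical, `diff` succeeds and the branch is skipped entirely, so an unchanged unit never triggers a needless daemon restart.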
	I1212 21:19:24.755983   11500 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-12 21:19:23.338300308 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1212 21:19:24.755983   11500 machine.go:97] duration metric: took 3.1460049s to provisionDockerMachine
	I1212 21:19:24.755983   11500 client.go:176] duration metric: took 11.6730993s to LocalClient.Create
	I1212 21:19:24.755983   11500 start.go:167] duration metric: took 11.6730993s to libmachine.API.Create "no-preload-285600"
	I1212 21:19:24.755983   11500 start.go:293] postStartSetup for "no-preload-285600" (driver="docker")
	I1212 21:19:24.755983   11500 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:19:24.759925   11500 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:19:24.762906   11500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:19:24.814451   11500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62132 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	I1212 21:19:24.943802   11500 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:19:24.951956   11500 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:19:24.951956   11500 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:19:24.951956   11500 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1212 21:19:24.951956   11500 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1212 21:19:24.953335   11500 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> 133962.pem in /etc/ssl/certs
	I1212 21:19:24.958227   11500 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:19:24.973362   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /etc/ssl/certs/133962.pem (1708 bytes)
	I1212 21:19:25.005913   11500 start.go:296] duration metric: took 249.9267ms for postStartSetup
	I1212 21:19:25.010921   11500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-285600
	I1212 21:19:25.070030   11500 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\config.json ...
	I1212 21:19:25.076026   11500 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:19:25.079022   11500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:19:25.136022   11500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62132 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	I1212 21:19:25.258665   11500 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:19:25.270120   11500 start.go:128] duration metric: took 12.1912296s to createHost
	I1212 21:19:25.270120   11500 start.go:83] releasing machines lock for "no-preload-285600", held for 12.1912296s
	I1212 21:19:25.273729   11500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-285600
	I1212 21:19:25.329383   11500 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1212 21:19:25.333382   11500 ssh_runner.go:195] Run: cat /version.json
	I1212 21:19:25.333382   11500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:19:25.336386   11500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:19:25.387387   11500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62132 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	I1212 21:19:25.389394   11500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62132 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	W1212 21:19:25.499964   11500 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1212 21:19:25.514068   11500 ssh_runner.go:195] Run: systemctl --version
	I1212 21:19:25.530410   11500 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:19:25.539419   11500 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:19:25.544827   11500 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W1212 21:19:25.585530   11500 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1212 21:19:25.585580   11500 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1212 21:19:25.618893   11500 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 21:19:25.618893   11500 start.go:496] detecting cgroup driver to use...
	I1212 21:19:25.618893   11500 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:19:25.618893   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:19:25.645975   11500 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 21:19:25.664008   11500 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 21:19:25.678013   11500 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 21:19:25.682009   11500 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 21:19:25.700007   11500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:19:25.721787   11500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 21:19:25.743166   11500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:19:25.766435   11500 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:19:25.784623   11500 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 21:19:25.802637   11500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 21:19:25.820626   11500 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 21:19:25.839624   11500 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:19:25.860985   11500 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:19:25.876598   11500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:19:26.043670   11500 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 21:19:26.260282   11500 start.go:496] detecting cgroup driver to use...
	I1212 21:19:26.260282   11500 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:19:26.265266   11500 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 21:19:26.291525   11500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:19:26.318524   11500 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:19:26.403712   11500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:19:26.429934   11500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 21:19:26.448819   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:19:26.477300   11500 ssh_runner.go:195] Run: which cri-dockerd
	I1212 21:19:26.488903   11500 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 21:19:26.508895   11500 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1212 21:19:26.536014   11500 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 21:19:26.693194   11500 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 21:19:26.860928   11500 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 21:19:26.861148   11500 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 21:19:26.885980   11500 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1212 21:19:26.908995   11500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:19:27.076743   11500 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 21:19:28.078996   11500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:19:28.102938   11500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1212 21:19:28.127357   11500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:19:28.155347   11500 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 21:19:28.315684   11500 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 21:19:28.469677   11500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:19:28.625339   11500 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 21:19:28.650343   11500 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1212 21:19:28.671957   11500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:19:28.821662   11500 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1212 21:19:28.940893   11500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:19:28.962463   11500 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 21:19:28.967843   11500 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 21:19:28.976555   11500 start.go:564] Will wait 60s for crictl version
	I1212 21:19:28.980749   11500 ssh_runner.go:195] Run: which crictl
	I1212 21:19:28.993035   11500 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:19:29.038667   11500 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1212 21:19:29.041667   11500 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:19:29.095509   11500 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:19:29.148063   11500 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1212 21:19:29.152600   11500 cli_runner.go:164] Run: docker exec -t no-preload-285600 dig +short host.docker.internal
	I1212 21:19:29.288119   11500 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1212 21:19:29.292140   11500 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1212 21:19:29.299142   11500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:19:29.322124   11500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:19:29.375121   11500 kubeadm.go:884] updating cluster {Name:no-preload-285600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-285600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 21:19:29.375121   11500 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 21:19:29.378121   11500 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:19:29.408418   11500 docker.go:691] Got preloaded images: 
	I1212 21:19:29.408418   11500 docker.go:697] registry.k8s.io/kube-apiserver:v1.35.0-beta.0 wasn't preloaded
	I1212 21:19:29.408418   11500 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 21:19:29.422575   11500 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:19:29.428576   11500 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 21:19:29.434434   11500 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 21:19:29.434434   11500 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:19:29.439432   11500 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 21:19:29.439432   11500 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 21:19:29.443463   11500 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 21:19:29.445438   11500 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1212 21:19:29.449437   11500 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 21:19:29.450445   11500 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 21:19:29.454443   11500 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1212 21:19:29.454443   11500 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1212 21:19:29.457438   11500 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1212 21:19:29.458458   11500 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 21:19:29.463437   11500 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1212 21:19:29.465439   11500 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	W1212 21:19:29.493004   11500 image.go:191] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1212 21:19:29.551184   11500 image.go:191] authn lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1212 21:19:29.599448   11500 image.go:191] authn lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1212 21:19:29.659625   11500 image.go:191] authn lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1212 21:19:29.710024   11500 image.go:191] authn lookup for registry.k8s.io/etcd:3.6.5-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1212 21:19:29.766651   11500 image.go:191] authn lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1212 21:19:29.771565   11500 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 21:19:29.806127   11500 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1212 21:19:29.806127   11500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1212 21:19:29.806127   11500 docker.go:338] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 21:19:29.811133   11500 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	W1212 21:19:29.823135   11500 image.go:191] authn lookup for registry.k8s.io/pause:3.10.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1212 21:19:29.826122   11500 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 21:19:29.846133   11500 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1212 21:19:29.852129   11500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1212 21:19:29.862134   11500 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1212 21:19:29.862134   11500 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1212 21:19:29.862134   11500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1212 21:19:29.862134   11500 docker.go:338] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 21:19:29.862134   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1212 21:19:29.866134   11500 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 21:19:29.878135   11500 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	W1212 21:19:29.889139   11500 image.go:191] authn lookup for registry.k8s.io/coredns/coredns:v1.13.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1212 21:19:29.924149   11500 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1212 21:19:29.953133   11500 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1212 21:19:29.958139   11500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1212 21:19:29.973137   11500 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1212 21:19:29.973137   11500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1212 21:19:29.973137   11500 docker.go:338] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 21:19:29.979136   11500 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 21:19:29.979136   11500 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 21:19:30.004145   11500 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1212 21:19:30.004145   11500 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1212 21:19:30.004145   11500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1212 21:19:30.004145   11500 docker.go:338] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1212 21:19:30.004145   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1212 21:19:30.009142   11500 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.6.5-0
	I1212 21:19:30.036155   11500 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1212 21:19:30.110344   11500 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1212 21:19:30.111335   11500 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1212 21:19:30.111335   11500 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1212 21:19:30.111335   11500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1212 21:19:30.111335   11500 docker.go:338] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 21:19:30.117333   11500 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 21:19:30.118329   11500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1212 21:19:30.145351   11500 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1212 21:19:30.151333   11500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1212 21:19:30.217129   11500 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1212 21:19:30.217244   11500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1212 21:19:30.217292   11500 docker.go:338] Removing image: registry.k8s.io/pause:3.10.1
	I1212 21:19:30.226428   11500 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.10.1
	I1212 21:19:30.260968   11500 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1212 21:19:30.260968   11500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1212 21:19:30.260968   11500 docker.go:338] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1212 21:19:30.263968   11500 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1212 21:19:30.267968   11500 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1212 21:19:30.267968   11500 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1212 21:19:30.268975   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1212 21:19:30.272973   11500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1212 21:19:30.278968   11500 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1212 21:19:30.278968   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1212 21:19:30.351207   11500 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1212 21:19:30.353201   11500 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1212 21:19:30.353201   11500 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1212 21:19:30.353201   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1212 21:19:30.359212   11500 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:19:30.359212   11500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1212 21:19:30.360235   11500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1212 21:19:30.482205   11500 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1212 21:19:30.482205   11500 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1212 21:19:30.482205   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1212 21:19:30.482205   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1212 21:19:30.484225   11500 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1212 21:19:30.485208   11500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1212 21:19:30.485208   11500 docker.go:338] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:19:30.490204   11500 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:19:30.691207   11500 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1212 21:19:30.696204   11500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1212 21:19:30.746217   11500 docker.go:305] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1212 21:19:30.747215   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.10.1 | docker load"
	I1212 21:19:30.888209   11500 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1212 21:19:30.888209   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1212 21:19:31.077210   11500 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 from cache
	I1212 21:19:31.252229   11500 docker.go:305] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1212 21:19:31.252229   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 | docker load"
	I1212 21:19:36.407728   11500 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 | docker load": (5.1554171s)
	I1212 21:19:36.407775   11500 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 from cache
	I1212 21:19:36.407775   11500 docker.go:305] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1212 21:19:36.407775   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 | docker load"
	I1212 21:19:39.507114   11500 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 | docker load": (3.0992898s)
	I1212 21:19:39.507114   11500 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 from cache
	I1212 21:19:39.507114   11500 docker.go:305] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1212 21:19:39.507114   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load"
	I1212 21:19:42.585494   11500 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load": (3.0783311s)
	I1212 21:19:42.585494   11500 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 from cache
	I1212 21:19:42.585494   11500 docker.go:305] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1212 21:19:42.585494   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 | docker load"
	I1212 21:19:46.136910   11500 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 | docker load": (3.5513599s)
	I1212 21:19:46.136910   11500 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 from cache
	I1212 21:19:46.136910   11500 docker.go:305] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1212 21:19:46.136910   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1212 21:19:47.139819   11500 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load": (1.002894s)
	I1212 21:19:47.139819   11500 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 from cache
	I1212 21:19:47.139819   11500 docker.go:305] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1212 21:19:47.139819   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 | docker load"
	I1212 21:19:48.789102   11500 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 | docker load": (1.6492564s)
	I1212 21:19:48.789102   11500 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 from cache
	I1212 21:19:48.789102   11500 docker.go:305] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1212 21:19:48.789102   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.13.1 | docker load"
	I1212 21:19:50.257704   11500 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.13.1 | docker load": (1.4685787s)
	I1212 21:19:50.257704   11500 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 from cache
	I1212 21:19:50.257704   11500 cache_images.go:125] Successfully loaded all cached images
	I1212 21:19:50.257704   11500 cache_images.go:94] duration metric: took 20.8489565s to LoadCachedImages
	I1212 21:19:50.257704   11500 kubeadm.go:935] updating node { 192.168.121.2 8443 v1.35.0-beta.0 docker true true} ...
	I1212 21:19:50.257704   11500 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-285600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.121.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-285600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:19:50.261603   11500 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 21:19:50.344352   11500 cni.go:84] Creating CNI manager for ""
	I1212 21:19:50.344408   11500 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:19:50.344408   11500 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 21:19:50.344408   11500 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.121.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-285600 NodeName:no-preload-285600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.121.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.121.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:19:50.344408   11500 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.121.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-285600"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.121.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.121.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:19:50.348361   11500 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 21:19:50.364988   11500 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1212 21:19:50.369563   11500 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 21:19:50.385605   11500 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubelet
	I1212 21:19:50.385605   11500 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubeadm
	I1212 21:19:50.385605   11500 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubectl
	I1212 21:19:51.493414   11500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1212 21:19:51.504639   11500 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1212 21:19:51.504639   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1212 21:19:51.558972   11500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:19:51.635221   11500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1212 21:19:51.681233   11500 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1212 21:19:51.681233   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1212 21:19:51.756818   11500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1212 21:19:51.834819   11500 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1212 21:19:51.834819   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1212 21:19:53.477870   11500 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:19:53.490869   11500 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1212 21:19:53.509873   11500 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 21:19:53.530334   11500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1212 21:19:53.556109   11500 ssh_runner.go:195] Run: grep 192.168.121.2	control-plane.minikube.internal$ /etc/hosts
	I1212 21:19:53.563308   11500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.121.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:19:53.582907   11500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:19:53.731002   11500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:19:53.755449   11500 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600 for IP: 192.168.121.2
	I1212 21:19:53.755449   11500 certs.go:195] generating shared ca certs ...
	I1212 21:19:53.755449   11500 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:19:53.756045   11500 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1212 21:19:53.756397   11500 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1212 21:19:53.756489   11500 certs.go:257] generating profile certs ...
	I1212 21:19:53.756900   11500 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\client.key
	I1212 21:19:53.756998   11500 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\client.crt with IP's: []
	I1212 21:19:53.870262   11500 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\client.crt ...
	I1212 21:19:53.871257   11500 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\client.crt: {Name:mkea463969f96c4d6685797c0f8ce6eb953748e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:19:53.872077   11500 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\client.key ...
	I1212 21:19:53.872077   11500 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\client.key: {Name:mka05d4dd201d247c11decfb29bbc83837f58b15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:19:53.873212   11500 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.key.a3b2baf6
	I1212 21:19:53.873212   11500 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.crt.a3b2baf6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.121.2]
	I1212 21:19:53.914497   11500 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.crt.a3b2baf6 ...
	I1212 21:19:53.914497   11500 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.crt.a3b2baf6: {Name:mke5db4d56520ca68011f83a06ddb400a6969701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:19:53.915547   11500 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.key.a3b2baf6 ...
	I1212 21:19:53.915547   11500 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.key.a3b2baf6: {Name:mk3ebc8c919f646cbe4ff90f62c381d3f2e2546e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:19:53.917407   11500 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.crt.a3b2baf6 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.crt
	I1212 21:19:53.933194   11500 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.key.a3b2baf6 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.key
	I1212 21:19:53.933780   11500 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\proxy-client.key
	I1212 21:19:53.933780   11500 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\proxy-client.crt with IP's: []
	I1212 21:19:53.983775   11500 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\proxy-client.crt ...
	I1212 21:19:53.983775   11500 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\proxy-client.crt: {Name:mk9e2611f6249b5253a898d387ce0751b5cc75b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:19:53.984780   11500 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\proxy-client.key ...
	I1212 21:19:53.984780   11500 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\proxy-client.key: {Name:mk5f65e5d4c3a5a658327bd443d14f8a81b45c60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:19:53.999837   11500 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem (1338 bytes)
	W1212 21:19:54.000947   11500 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396_empty.pem, impossibly tiny 0 bytes
	I1212 21:19:54.000947   11500 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1212 21:19:54.000947   11500 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 21:19:54.001537   11500 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 21:19:54.001537   11500 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1212 21:19:54.002163   11500 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem (1708 bytes)
	I1212 21:19:54.002928   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:19:54.035794   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 21:19:54.062576   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:19:54.093287   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:19:54.120671   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 21:19:54.150291   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 21:19:54.177911   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:19:54.212125   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:19:54.240916   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem --> /usr/share/ca-certificates/13396.pem (1338 bytes)
	I1212 21:19:54.271773   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /usr/share/ca-certificates/133962.pem (1708 bytes)
	I1212 21:19:54.303492   11500 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:19:54.335863   11500 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:19:54.366123   11500 ssh_runner.go:195] Run: openssl version
	I1212 21:19:54.383927   11500 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/133962.pem
	I1212 21:19:54.404179   11500 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/133962.pem /etc/ssl/certs/133962.pem
	I1212 21:19:54.423064   11500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133962.pem
	I1212 21:19:54.433274   11500 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 21:19:54.437454   11500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133962.pem
	I1212 21:19:54.494300   11500 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:19:54.512310   11500 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/133962.pem /etc/ssl/certs/3ec20f2e.0
	I1212 21:19:54.529266   11500 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:19:54.548180   11500 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:19:54.566859   11500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:19:54.577921   11500 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:19:54.582721   11500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:19:54.639577   11500 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:19:54.658152   11500 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 21:19:54.680113   11500 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13396.pem
	I1212 21:19:54.698677   11500 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13396.pem /etc/ssl/certs/13396.pem
	I1212 21:19:54.718546   11500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13396.pem
	I1212 21:19:54.727826   11500 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 21:19:54.733349   11500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13396.pem
	I1212 21:19:54.781025   11500 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:19:54.798882   11500 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/13396.pem /etc/ssl/certs/51391683.0
	I1212 21:19:54.818946   11500 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:19:54.830191   11500 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 21:19:54.830378   11500 kubeadm.go:401] StartCluster: {Name:no-preload-285600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-285600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:19:54.835162   11500 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 21:19:54.871664   11500 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:19:54.888804   11500 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:19:54.904866   11500 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 21:19:54.909199   11500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:19:54.923859   11500 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:19:54.923859   11500 kubeadm.go:158] found existing configuration files:
	
	I1212 21:19:54.927851   11500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 21:19:54.940842   11500 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 21:19:54.944845   11500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 21:19:54.962369   11500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 21:19:54.976883   11500 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 21:19:54.981084   11500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 21:19:54.997668   11500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 21:19:55.011030   11500 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 21:19:55.015161   11500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 21:19:55.032693   11500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 21:19:55.045702   11500 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 21:19:55.049694   11500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 21:19:55.065693   11500 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 21:19:55.178939   11500 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1212 21:19:55.265961   11500 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 21:19:55.374410   11500 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 21:23:57.490599   11500 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 21:23:57.490599   11500 kubeadm.go:319] 
	I1212 21:23:57.490599   11500 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 21:23:57.495885   11500 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 21:23:57.496001   11500 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 21:23:57.497139   11500 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 21:23:57.497139   11500 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 21:23:57.497139   11500 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 21:23:57.497139   11500 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 21:23:57.497669   11500 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 21:23:57.497746   11500 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 21:23:57.497746   11500 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 21:23:57.497746   11500 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 21:23:57.497746   11500 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 21:23:57.497746   11500 kubeadm.go:319] CONFIG_INET: enabled
	I1212 21:23:57.498271   11500 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 21:23:57.498345   11500 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 21:23:57.498345   11500 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 21:23:57.498345   11500 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 21:23:57.498345   11500 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 21:23:57.498900   11500 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 21:23:57.498900   11500 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 21:23:57.498900   11500 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 21:23:57.498900   11500 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 21:23:57.498900   11500 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 21:23:57.498900   11500 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 21:23:57.499450   11500 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 21:23:57.499613   11500 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 21:23:57.499682   11500 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 21:23:57.499716   11500 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 21:23:57.499716   11500 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 21:23:57.499716   11500 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 21:23:57.499716   11500 kubeadm.go:319] OS: Linux
	I1212 21:23:57.499716   11500 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 21:23:57.500238   11500 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 21:23:57.500273   11500 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 21:23:57.500273   11500 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 21:23:57.500273   11500 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 21:23:57.500273   11500 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 21:23:57.500273   11500 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 21:23:57.500273   11500 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 21:23:57.500863   11500 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 21:23:57.501070   11500 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:23:57.501182   11500 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:23:57.501182   11500 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 21:23:57.501182   11500 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:23:57.504498   11500 out.go:252]   - Generating certificates and keys ...
	I1212 21:23:57.505131   11500 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 21:23:57.505131   11500 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 21:23:57.505131   11500 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 21:23:57.505131   11500 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 21:23:57.505777   11500 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 21:23:57.505777   11500 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 21:23:57.505777   11500 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 21:23:57.506311   11500 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-285600] and IPs [192.168.121.2 127.0.0.1 ::1]
	I1212 21:23:57.506429   11500 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 21:23:57.506429   11500 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-285600] and IPs [192.168.121.2 127.0.0.1 ::1]
	I1212 21:23:57.506429   11500 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 21:23:57.506429   11500 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 21:23:57.506990   11500 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 21:23:57.506990   11500 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:23:57.506990   11500 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:23:57.506990   11500 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 21:23:57.506990   11500 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:23:57.506990   11500 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:23:57.506990   11500 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:23:57.506990   11500 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:23:57.506990   11500 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:23:57.510650   11500 out.go:252]   - Booting up control plane ...
	I1212 21:23:57.510650   11500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:23:57.510650   11500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:23:57.510650   11500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:23:57.511664   11500 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:23:57.511664   11500 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 21:23:57.511664   11500 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 21:23:57.511664   11500 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:23:57.511664   11500 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 21:23:57.511664   11500 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 21:23:57.512655   11500 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 21:23:57.512655   11500 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000951132s
	I1212 21:23:57.512655   11500 kubeadm.go:319] 
	I1212 21:23:57.512655   11500 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 21:23:57.512655   11500 kubeadm.go:319] 	- The kubelet is not running
	I1212 21:23:57.512655   11500 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 21:23:57.512655   11500 kubeadm.go:319] 
	I1212 21:23:57.512655   11500 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 21:23:57.512655   11500 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 21:23:57.512655   11500 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 21:23:57.512655   11500 kubeadm.go:319] 
	W1212 21:23:57.513649   11500 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-285600] and IPs [192.168.121.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-285600] and IPs [192.168.121.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000951132s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-285600] and IPs [192.168.121.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-285600] and IPs [192.168.121.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000951132s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 21:23:57.516687   11500 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1212 21:23:57.973632   11500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:23:58.000358   11500 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 21:23:58.005518   11500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:23:58.022197   11500 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:23:58.022197   11500 kubeadm.go:158] found existing configuration files:
	
	I1212 21:23:58.026872   11500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 21:23:58.039115   11500 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 21:23:58.043123   11500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 21:23:58.060114   11500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 21:23:58.073122   11500 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 21:23:58.076119   11500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 21:23:58.092125   11500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 21:23:58.107123   11500 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 21:23:58.112123   11500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 21:23:58.132133   11500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 21:23:58.145128   11500 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 21:23:58.149118   11500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 21:23:58.165115   11500 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 21:23:58.280707   11500 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1212 21:23:58.378404   11500 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 21:23:58.484549   11500 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 21:27:59.635671   11500 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 21:27:59.635671   11500 kubeadm.go:319] 
	I1212 21:27:59.636285   11500 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 21:27:59.640685   11500 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 21:27:59.640685   11500 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 21:27:59.641210   11500 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 21:27:59.641454   11500 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 21:27:59.641454   11500 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 21:27:59.641454   11500 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 21:27:59.641454   11500 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 21:27:59.641454   11500 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 21:27:59.641454   11500 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 21:27:59.642159   11500 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 21:27:59.642187   11500 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 21:27:59.642187   11500 kubeadm.go:319] CONFIG_INET: enabled
	I1212 21:27:59.642187   11500 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 21:27:59.642187   11500 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 21:27:59.642718   11500 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 21:27:59.642918   11500 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 21:27:59.643104   11500 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 21:27:59.643295   11500 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 21:27:59.643295   11500 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 21:27:59.643295   11500 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 21:27:59.643295   11500 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 21:27:59.643295   11500 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 21:27:59.643935   11500 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 21:27:59.643987   11500 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 21:27:59.643987   11500 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 21:27:59.643987   11500 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 21:27:59.643987   11500 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 21:27:59.643987   11500 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 21:27:59.644635   11500 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 21:27:59.644733   11500 kubeadm.go:319] OS: Linux
	I1212 21:27:59.644880   11500 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 21:27:59.645003   11500 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 21:27:59.645114   11500 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 21:27:59.645225   11500 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 21:27:59.645248   11500 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 21:27:59.645248   11500 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 21:27:59.645248   11500 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 21:27:59.645248   11500 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 21:27:59.645248   11500 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 21:27:59.645248   11500 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:27:59.645998   11500 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:27:59.646240   11500 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 21:27:59.646401   11500 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:27:59.649353   11500 out.go:252]   - Generating certificates and keys ...
	I1212 21:27:59.649353   11500 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 21:27:59.649353   11500 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 21:27:59.649353   11500 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 21:27:59.649996   11500 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 21:27:59.649996   11500 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 21:27:59.649996   11500 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 21:27:59.649996   11500 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 21:27:59.649996   11500 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 21:27:59.650580   11500 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 21:27:59.650580   11500 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 21:27:59.650580   11500 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 21:27:59.650580   11500 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:27:59.650580   11500 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:27:59.650580   11500 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 21:27:59.651191   11500 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:27:59.651254   11500 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:27:59.651254   11500 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:27:59.651254   11500 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:27:59.651254   11500 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:27:59.653668   11500 out.go:252]   - Booting up control plane ...
	I1212 21:27:59.653940   11500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 21:27:59.655077   11500 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 21:27:59.655321   11500 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 21:27:59.655492   11500 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00060482s
	I1212 21:27:59.655492   11500 kubeadm.go:319] 
	I1212 21:27:59.655630   11500 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 21:27:59.655630   11500 kubeadm.go:319] 	- The kubelet is not running
	I1212 21:27:59.655821   11500 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 21:27:59.655821   11500 kubeadm.go:319] 
	I1212 21:27:59.656041   11500 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 21:27:59.656041   11500 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 21:27:59.656041   11500 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 21:27:59.656041   11500 kubeadm.go:319] 
	I1212 21:27:59.656041   11500 kubeadm.go:403] duration metric: took 8m4.8179078s to StartCluster
	I1212 21:27:59.656041   11500 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:27:59.659651   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:27:59.720934   11500 cri.go:89] found id: ""
	I1212 21:27:59.720934   11500 logs.go:282] 0 containers: []
	W1212 21:27:59.720934   11500 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:27:59.720934   11500 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:27:59.725183   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:27:59.766585   11500 cri.go:89] found id: ""
	I1212 21:27:59.766585   11500 logs.go:282] 0 containers: []
	W1212 21:27:59.766585   11500 logs.go:284] No container was found matching "etcd"
	I1212 21:27:59.766585   11500 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:27:59.771623   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:27:59.811981   11500 cri.go:89] found id: ""
	I1212 21:27:59.811981   11500 logs.go:282] 0 containers: []
	W1212 21:27:59.811981   11500 logs.go:284] No container was found matching "coredns"
	I1212 21:27:59.811981   11500 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:27:59.817402   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:27:59.863867   11500 cri.go:89] found id: ""
	I1212 21:27:59.863867   11500 logs.go:282] 0 containers: []
	W1212 21:27:59.863867   11500 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:27:59.863867   11500 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:27:59.874092   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:27:59.916790   11500 cri.go:89] found id: ""
	I1212 21:27:59.916790   11500 logs.go:282] 0 containers: []
	W1212 21:27:59.916790   11500 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:27:59.916790   11500 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:27:59.921036   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:27:59.972193   11500 cri.go:89] found id: ""
	I1212 21:27:59.972193   11500 logs.go:282] 0 containers: []
	W1212 21:27:59.972193   11500 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:27:59.972193   11500 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:27:59.976673   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:28:00.020419   11500 cri.go:89] found id: ""
	I1212 21:28:00.020419   11500 logs.go:282] 0 containers: []
	W1212 21:28:00.020419   11500 logs.go:284] No container was found matching "kindnet"
	I1212 21:28:00.020419   11500 logs.go:123] Gathering logs for container status ...
	I1212 21:28:00.020419   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:28:00.075393   11500 logs.go:123] Gathering logs for kubelet ...
	I1212 21:28:00.075393   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:28:00.136556   11500 logs.go:123] Gathering logs for dmesg ...
	I1212 21:28:00.136556   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:28:00.180601   11500 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:28:00.180601   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:28:00.264769   11500 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:28:00.257747   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.258889   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.260305   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.261725   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.262897   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:28:00.257747   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.258889   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.260305   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.261725   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.262897   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:28:00.264769   11500 logs.go:123] Gathering logs for Docker ...
	I1212 21:28:00.264769   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1212 21:28:00.295184   11500 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00060482s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 21:28:00.295286   11500 out.go:285] * 
	W1212 21:28:00.295361   11500 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00060482s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	
	W1212 21:28:00.295361   11500 out.go:285] * 
	W1212 21:28:00.297172   11500 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 21:28:00.306876   11500 out.go:203] 
	W1212 21:28:00.310659   11500 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00060482s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	
	W1212 21:28:00.310880   11500 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 21:28:00.310880   11500 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 21:28:00.312599   11500 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p no-preload-285600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0": exit status 109
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-285600
helpers_test.go:244: (dbg) docker inspect no-preload-285600:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941",
	        "Created": "2025-12-12T21:19:18.160660304Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 389675,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T21:19:18.519800705Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941/hostname",
	        "HostsPath": "/var/lib/docker/containers/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941/hosts",
	        "LogPath": "/var/lib/docker/containers/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941-json.log",
	        "Name": "/no-preload-285600",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-285600:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-285600",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7f57614ba244d0d3b05389aca0ac3118a52910d8ae213a8eb43d4329923d883e-init/diff:/var/lib/docker/overlay2/98a48e148b465d6f3cfef773b7defe35ccd4e6e0eb3c49d8b329f27b5b52fa09/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7f57614ba244d0d3b05389aca0ac3118a52910d8ae213a8eb43d4329923d883e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7f57614ba244d0d3b05389aca0ac3118a52910d8ae213a8eb43d4329923d883e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7f57614ba244d0d3b05389aca0ac3118a52910d8ae213a8eb43d4329923d883e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-285600",
	                "Source": "/var/lib/docker/volumes/no-preload-285600/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-285600",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-285600",
	                "name.minikube.sigs.k8s.io": "no-preload-285600",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5af3f413668a0d538b65d8f61bdb8f76c9d3fffc039f5c39eab88c8e538214f8",
	            "SandboxKey": "/var/run/docker/netns/5af3f413668a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62132"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62133"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62134"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-285600": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.121.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:79:02",
	                    "DriverOpts": null,
	                    "NetworkID": "eade8f7b7c484afc6ac9fb22c89a1319287a418e545549931d13e1b2247abede",
	                    "EndpointID": "41d46d4540a8534435610e3455fd03f86fe030069ea47ea0bc7248badc5ae81c",
	                    "Gateway": "192.168.121.1",
	                    "IPAddress": "192.168.121.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-285600",
	                        "87204d707694"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-285600 -n no-preload-285600
E1212 21:28:01.174836   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-285600 -n no-preload-285600: exit status 6 (610.9195ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1212 21:28:01.387778   10676 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-285600" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-285600 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-285600 logs -n 25: (1.1570076s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                            ARGS                                                                                                            │           PROFILE            │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-124600 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:21 UTC │ 12 Dec 25 21:22 UTC │
	│ addons  │ enable metrics-server -p embed-certs-729900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                   │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:21 UTC │ 12 Dec 25 21:21 UTC │
	│ stop    │ -p embed-certs-729900 --alsologtostderr -v=3                                                                                                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:21 UTC │ 12 Dec 25 21:21 UTC │
	│ addons  │ enable dashboard -p embed-certs-729900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                              │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:21 UTC │ 12 Dec 25 21:21 UTC │
	│ start   │ -p embed-certs-729900 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.2                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:21 UTC │ 12 Dec 25 21:22 UTC │
	│ image   │ old-k8s-version-246400 image list --format=json                                                                                                                                                                            │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ pause   │ -p old-k8s-version-246400 --alsologtostderr -v=1                                                                                                                                                                           │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ unpause │ -p old-k8s-version-246400 --alsologtostderr -v=1                                                                                                                                                                           │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ delete  │ -p old-k8s-version-246400                                                                                                                                                                                                  │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ delete  │ -p old-k8s-version-246400                                                                                                                                                                                                  │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ start   │ -p newest-cni-449900 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-124600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                         │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ stop    │ -p default-k8s-diff-port-124600 --alsologtostderr -v=3                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ image   │ embed-certs-729900 image list --format=json                                                                                                                                                                                │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ pause   │ -p embed-certs-729900 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ unpause │ -p embed-certs-729900 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-124600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                    │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ start   │ -p default-k8s-diff-port-124600 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:23 UTC │
	│ delete  │ -p embed-certs-729900                                                                                                                                                                                                      │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:23 UTC │
	│ delete  │ -p embed-certs-729900                                                                                                                                                                                                      │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ image   │ default-k8s-diff-port-124600 image list --format=json                                                                                                                                                                      │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ pause   │ -p default-k8s-diff-port-124600 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ unpause │ -p default-k8s-diff-port-124600 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ delete  │ -p default-k8s-diff-port-124600                                                                                                                                                                                            │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ delete  │ -p default-k8s-diff-port-124600                                                                                                                                                                                            │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 21:22:58
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 21:22:58.216335   14160 out.go:360] Setting OutFile to fd 1132 ...
	I1212 21:22:58.266331   14160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:22:58.266331   14160 out.go:374] Setting ErrFile to fd 1508...
	I1212 21:22:58.266331   14160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:22:58.280322   14160 out.go:368] Setting JSON to false
	I1212 21:22:58.283341   14160 start.go:133] hostinfo: {"hostname":"minikube4","uptime":8716,"bootTime":1765565862,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 21:22:58.283341   14160 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 21:22:58.287338   14160 out.go:179] * [default-k8s-diff-port-124600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 21:22:58.290341   14160 notify.go:221] Checking for updates...
	I1212 21:22:58.292332   14160 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:22:58.294328   14160 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:22:58.296340   14160 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 21:22:58.298340   14160 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 21:22:58.301322   14160 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:22:58.304323   14160 config.go:182] Loaded profile config "default-k8s-diff-port-124600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1212 21:22:58.305325   14160 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 21:22:58.434944   14160 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 21:22:58.438949   14160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:22:58.676253   14160 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-12 21:22:58.655092827 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:22:58.680239   14160 out.go:179] * Using the docker driver based on existing profile
	I1212 21:22:58.682239   14160 start.go:309] selected driver: docker
	I1212 21:22:58.682239   14160 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-124600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-124600 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:22:58.682239   14160 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:22:58.732240   14160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:22:58.965241   14160 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:95 OomKillDisable:true NGoroutines:100 SystemTime:2025-12-12 21:22:58.948719453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 In
dexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:22:58.966243   14160 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:22:58.966243   14160 cni.go:84] Creating CNI manager for ""
	I1212 21:22:58.966243   14160 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:22:58.966243   14160 start.go:353] cluster config:
	{Name:default-k8s-diff-port-124600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-124600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:22:58.968243   14160 out.go:179] * Starting "default-k8s-diff-port-124600" primary control-plane node in "default-k8s-diff-port-124600" cluster
	I1212 21:22:58.972244   14160 cache.go:134] Beginning downloading kic base image for docker with docker
	I1212 21:22:58.974236   14160 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:22:58.977243   14160 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1212 21:22:58.977243   14160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:22:58.977243   14160 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1212 21:22:58.977243   14160 cache.go:65] Caching tarball of preloaded images
	I1212 21:22:58.977243   14160 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 21:22:58.978245   14160 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1212 21:22:58.978245   14160 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\config.json ...
	I1212 21:22:59.059257   14160 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:22:59.059257   14160 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:22:59.059257   14160 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:22:59.059257   14160 start.go:360] acquireMachinesLock for default-k8s-diff-port-124600: {Name:mk780a32308b64368d3930722f9e881df08c3504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:22:59.059257   14160 start.go:364] duration metric: took 0s to acquireMachinesLock for "default-k8s-diff-port-124600"
	I1212 21:22:59.059257   14160 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:22:59.059257   14160 fix.go:54] fixHost starting: 
	I1212 21:22:59.066252   14160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124600 --format={{.State.Status}}
	I1212 21:22:59.129461   14160 fix.go:112] recreateIfNeeded on default-k8s-diff-port-124600: state=Stopped err=<nil>
	W1212 21:22:59.129461   14160 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:22:59.133088   14160 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-124600" ...
	I1212 21:22:59.136686   14160 cli_runner.go:164] Run: docker start default-k8s-diff-port-124600
	I1212 21:22:59.862889   14160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124600 --format={{.State.Status}}
	I1212 21:22:59.919149   14160 kic.go:430] container "default-k8s-diff-port-124600" state is running.
	I1212 21:22:59.924156   14160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-124600
	I1212 21:22:59.977149   14160 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\config.json ...
	I1212 21:22:59.979157   14160 machine.go:94] provisionDockerMachine start ...
	I1212 21:22:59.982162   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:00.038158   14160 main.go:143] libmachine: Using SSH client type: native
	I1212 21:23:00.038158   14160 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62677 <nil> <nil>}
	I1212 21:23:00.038158   14160 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:23:00.040164   14160 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 21:23:03.234044   14160 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-124600
	
	I1212 21:23:03.234044   14160 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-124600"
	I1212 21:23:03.237963   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:03.294306   14160 main.go:143] libmachine: Using SSH client type: native
	I1212 21:23:03.294306   14160 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62677 <nil> <nil>}
	I1212 21:23:03.294306   14160 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-124600 && echo "default-k8s-diff-port-124600" | sudo tee /etc/hostname
	I1212 21:23:03.491471   14160 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-124600
	
	I1212 21:23:03.495244   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:03.552274   14160 main.go:143] libmachine: Using SSH client type: native
	I1212 21:23:03.552715   14160 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62677 <nil> <nil>}
	I1212 21:23:03.552715   14160 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-124600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-124600/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-124600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:23:03.726759   14160 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:23:03.726759   14160 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1212 21:23:03.726759   14160 ubuntu.go:190] setting up certificates
	I1212 21:23:03.726759   14160 provision.go:84] configureAuth start
	I1212 21:23:03.730596   14160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-124600
	I1212 21:23:03.786827   14160 provision.go:143] copyHostCerts
	I1212 21:23:03.787473   14160 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1212 21:23:03.787473   14160 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1212 21:23:03.787473   14160 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 21:23:03.788324   14160 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1212 21:23:03.788324   14160 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1212 21:23:03.788845   14160 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 21:23:03.789576   14160 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1212 21:23:03.789576   14160 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1212 21:23:03.789576   14160 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1212 21:23:03.790404   14160 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.default-k8s-diff-port-124600 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-124600 localhost minikube]
	I1212 21:23:04.028472   14160 provision.go:177] copyRemoteCerts
	I1212 21:23:04.032783   14160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:23:04.035720   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:04.090685   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	I1212 21:23:04.220108   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 21:23:04.251841   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1249 bytes)
	I1212 21:23:04.283040   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 21:23:04.313548   14160 provision.go:87] duration metric: took 586.7803ms to configureAuth
	I1212 21:23:04.313548   14160 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:23:04.313548   14160 config.go:182] Loaded profile config "default-k8s-diff-port-124600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1212 21:23:04.319686   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:04.374458   14160 main.go:143] libmachine: Using SSH client type: native
	I1212 21:23:04.375110   14160 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62677 <nil> <nil>}
	I1212 21:23:04.375110   14160 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 21:23:04.546890   14160 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1212 21:23:04.546890   14160 ubuntu.go:71] root file system type: overlay
	I1212 21:23:04.546890   14160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 21:23:04.551279   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:04.607300   14160 main.go:143] libmachine: Using SSH client type: native
	I1212 21:23:04.607818   14160 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62677 <nil> <nil>}
	I1212 21:23:04.607929   14160 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 21:23:04.799190   14160 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 21:23:04.802868   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:04.862025   14160 main.go:143] libmachine: Using SSH client type: native
	I1212 21:23:04.862025   14160 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62677 <nil> <nil>}
	I1212 21:23:04.862025   14160 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 21:23:05.043356   14160 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:23:05.043406   14160 machine.go:97] duration metric: took 5.0641684s to provisionDockerMachine
	I1212 21:23:05.043449   14160 start.go:293] postStartSetup for "default-k8s-diff-port-124600" (driver="docker")
	I1212 21:23:05.043449   14160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:23:05.047805   14160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:23:05.051418   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:05.110898   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	I1212 21:23:05.255814   14160 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:23:05.264052   14160 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:23:05.264052   14160 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:23:05.264052   14160 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1212 21:23:05.264766   14160 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1212 21:23:05.265608   14160 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> 133962.pem in /etc/ssl/certs
	I1212 21:23:05.270881   14160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:23:05.288263   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /etc/ssl/certs/133962.pem (1708 bytes)
	I1212 21:23:05.316332   14160 start.go:296] duration metric: took 272.8783ms for postStartSetup
	I1212 21:23:05.320908   14160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:23:05.324174   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:05.375311   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	I1212 21:23:05.511900   14160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:23:05.522084   14160 fix.go:56] duration metric: took 6.4622006s for fixHost
	I1212 21:23:05.522084   14160 start.go:83] releasing machines lock for "default-k8s-diff-port-124600", held for 6.4627242s
	I1212 21:23:05.525524   14160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-124600
	I1212 21:23:05.580943   14160 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1212 21:23:05.584825   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:05.585512   14160 ssh_runner.go:195] Run: cat /version.json
	I1212 21:23:05.589557   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:05.645453   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	I1212 21:23:05.647465   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	W1212 21:23:05.764866   14160 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1212 21:23:05.777208   14160 ssh_runner.go:195] Run: systemctl --version
	I1212 21:23:05.795091   14160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:23:05.805053   14160 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:23:05.808995   14160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:23:05.822377   14160 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:23:05.822377   14160 start.go:496] detecting cgroup driver to use...
	I1212 21:23:05.822377   14160 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:23:05.822377   14160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:23:05.850571   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1212 21:23:05.860918   14160 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1212 21:23:05.860962   14160 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1212 21:23:05.870950   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 21:23:05.886032   14160 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 21:23:05.890300   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 21:23:05.911690   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:23:05.931881   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 21:23:05.951355   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:23:05.972217   14160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:23:05.989654   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 21:23:06.008555   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 21:23:06.029580   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 21:23:06.051557   14160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:23:06.068272   14160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:23:06.088555   14160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:23:06.232851   14160 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 21:23:06.395580   14160 start.go:496] detecting cgroup driver to use...
	I1212 21:23:06.396135   14160 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:23:06.401664   14160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 21:23:06.427774   14160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:23:06.449987   14160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:23:06.530054   14160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:23:06.552557   14160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 21:23:06.573212   14160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:23:06.601206   14160 ssh_runner.go:195] Run: which cri-dockerd
	I1212 21:23:06.613316   14160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 21:23:06.629256   14160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1212 21:23:06.655736   14160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 21:23:06.808191   14160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 21:23:06.948697   14160 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 21:23:06.949225   14160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 21:23:06.973857   14160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1212 21:23:06.995178   14160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:23:07.159801   14160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 21:23:08.387280   14160 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.2274602s)
	I1212 21:23:08.392059   14160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:23:08.414696   14160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1212 21:23:08.439024   14160 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1212 21:23:08.465914   14160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:23:08.488326   14160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 21:23:08.636890   14160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 21:23:08.775314   14160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:23:08.926196   14160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 21:23:08.950709   14160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1212 21:23:08.974437   14160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:23:09.109676   14160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1212 21:23:09.227758   14160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:23:09.246593   14160 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 21:23:09.251694   14160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 21:23:09.259250   14160 start.go:564] Will wait 60s for crictl version
	I1212 21:23:09.263473   14160 ssh_runner.go:195] Run: which crictl
	I1212 21:23:09.274454   14160 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:23:09.319908   14160 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1212 21:23:09.323619   14160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:23:09.371068   14160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:23:09.415300   14160 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.1.2 ...
	I1212 21:23:09.420229   14160 cli_runner.go:164] Run: docker exec -t default-k8s-diff-port-124600 dig +short host.docker.internal
	I1212 21:23:09.561538   14160 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1212 21:23:09.566410   14160 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1212 21:23:09.573305   14160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:23:09.594186   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:09.649016   14160 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-124600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-124600 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 21:23:09.649995   14160 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1212 21:23:09.652859   14160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:23:09.686348   14160 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1212 21:23:09.686348   14160 docker.go:621] Images already preloaded, skipping extraction
	I1212 21:23:09.689834   14160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:23:09.722637   14160 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1212 21:23:09.722717   14160 cache_images.go:86] Images are preloaded, skipping loading
	I1212 21:23:09.722717   14160 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.2 docker true true} ...
	I1212 21:23:09.722968   14160 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-124600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-124600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:23:09.726467   14160 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 21:23:09.804166   14160 cni.go:84] Creating CNI manager for ""
	I1212 21:23:09.804166   14160 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:23:09.804166   14160 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 21:23:09.804166   14160 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-124600 NodeName:default-k8s-diff-port-124600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:23:09.804776   14160 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-124600"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:23:09.809184   14160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 21:23:09.822880   14160 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:23:09.827517   14160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:23:09.843159   14160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1212 21:23:09.865173   14160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:23:09.883664   14160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1212 21:23:09.910110   14160 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 21:23:09.917548   14160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:23:09.936437   14160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:23:10.076798   14160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:23:10.099969   14160 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600 for IP: 192.168.76.2
	I1212 21:23:10.099969   14160 certs.go:195] generating shared ca certs ...
	I1212 21:23:10.099969   14160 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:23:10.100633   14160 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1212 21:23:10.100633   14160 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1212 21:23:10.100633   14160 certs.go:257] generating profile certs ...
	I1212 21:23:10.101754   14160 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\client.key
	I1212 21:23:10.102187   14160 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\apiserver.key.c1ba716d
	I1212 21:23:10.102537   14160 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\proxy-client.key
	I1212 21:23:10.103938   14160 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem (1338 bytes)
	W1212 21:23:10.103938   14160 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396_empty.pem, impossibly tiny 0 bytes
	I1212 21:23:10.103938   14160 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1212 21:23:10.104497   14160 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 21:23:10.104785   14160 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 21:23:10.105145   14160 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1212 21:23:10.105904   14160 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem (1708 bytes)
	I1212 21:23:10.107597   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:23:10.138041   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 21:23:10.169285   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:23:10.199761   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:23:10.228706   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 21:23:10.259268   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 21:23:10.319083   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:23:10.408082   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:23:10.504827   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:23:10.535027   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem --> /usr/share/ca-certificates/13396.pem (1338 bytes)
	I1212 21:23:10.606848   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /usr/share/ca-certificates/133962.pem (1708 bytes)
	I1212 21:23:10.641191   14160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:23:10.699040   14160 ssh_runner.go:195] Run: openssl version
	I1212 21:23:10.713021   14160 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/133962.pem
	I1212 21:23:10.729389   14160 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/133962.pem /etc/ssl/certs/133962.pem
	I1212 21:23:10.746196   14160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133962.pem
	I1212 21:23:10.754411   14160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 21:23:10.759187   14160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133962.pem
	I1212 21:23:10.807227   14160 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:23:10.824046   14160 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:23:10.841672   14160 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:23:10.866000   14160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:23:10.875373   14160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:23:10.880699   14160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:23:10.937889   14160 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:23:10.955118   14160 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13396.pem
	I1212 21:23:10.975153   14160 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13396.pem /etc/ssl/certs/13396.pem
	I1212 21:23:10.995392   14160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13396.pem
	I1212 21:23:11.003494   14160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 21:23:11.008922   14160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13396.pem
	I1212 21:23:11.057570   14160 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:23:11.076453   14160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:23:11.089632   14160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:23:11.142247   14160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:23:11.218728   14160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:23:11.416273   14160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:23:11.544319   14160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:23:11.636634   14160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 21:23:11.685985   14160 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-124600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-124600 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:23:11.690036   14160 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 21:23:11.724509   14160 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:23:11.737581   14160 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 21:23:11.737638   14160 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 21:23:11.743506   14160 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:23:11.757047   14160 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:23:11.761811   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:11.815778   14160 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-124600" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:23:11.816493   14160 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-124600" cluster setting kubeconfig missing "default-k8s-diff-port-124600" context setting]
	I1212 21:23:11.816493   14160 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:23:11.838352   14160 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:23:11.855027   14160 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1212 21:23:11.855027   14160 kubeadm.go:602] duration metric: took 117.3468ms to restartPrimaryControlPlane
	I1212 21:23:11.855027   14160 kubeadm.go:403] duration metric: took 169.0394ms to StartCluster
	I1212 21:23:11.855027   14160 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:23:11.855027   14160 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:23:11.856184   14160 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:23:11.856963   14160 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 21:23:11.856963   14160 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 21:23:11.856963   14160 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-124600"
	I1212 21:23:11.856963   14160 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-124600"
	I1212 21:23:11.856963   14160 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-124600"
	I1212 21:23:11.856963   14160 config.go:182] Loaded profile config "default-k8s-diff-port-124600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1212 21:23:11.856963   14160 addons.go:70] Setting metrics-server=true in profile "default-k8s-diff-port-124600"
	I1212 21:23:11.856963   14160 addons.go:239] Setting addon metrics-server=true in "default-k8s-diff-port-124600"
	I1212 21:23:11.856963   14160 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-124600"
	W1212 21:23:11.857487   14160 addons.go:248] addon metrics-server should already be in state true
	I1212 21:23:11.857567   14160 host.go:66] Checking if "default-k8s-diff-port-124600" exists ...
	I1212 21:23:11.856963   14160 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-124600"
	I1212 21:23:11.856963   14160 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-124600"
	W1212 21:23:11.857598   14160 addons.go:248] addon storage-provisioner should already be in state true
	W1212 21:23:11.857598   14160 addons.go:248] addon dashboard should already be in state true
	I1212 21:23:11.857767   14160 host.go:66] Checking if "default-k8s-diff-port-124600" exists ...
	I1212 21:23:11.857819   14160 host.go:66] Checking if "default-k8s-diff-port-124600" exists ...
	I1212 21:23:11.863101   14160 out.go:179] * Verifying Kubernetes components...
	I1212 21:23:11.866976   14160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124600 --format={{.State.Status}}
	I1212 21:23:11.868177   14160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124600 --format={{.State.Status}}
	I1212 21:23:11.870310   14160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124600 --format={{.State.Status}}
	I1212 21:23:11.870461   14160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124600 --format={{.State.Status}}
	I1212 21:23:11.871764   14160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:23:11.932064   14160 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1212 21:23:11.934081   14160 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:23:11.942073   14160 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:23:11.942073   14160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:23:11.944073   14160 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1212 21:23:11.945075   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:11.947064   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 21:23:11.947064   14160 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 21:23:11.951064   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:11.953072   14160 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-124600"
	W1212 21:23:11.953072   14160 addons.go:248] addon default-storageclass should already be in state true
	I1212 21:23:11.953072   14160 host.go:66] Checking if "default-k8s-diff-port-124600" exists ...
	I1212 21:23:11.962072   14160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124600 --format={{.State.Status}}
	I1212 21:23:11.977069   14160 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 21:23:11.983067   14160 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 21:23:11.983067   14160 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 21:23:11.988070   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:12.005074   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	I1212 21:23:12.009067   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	I1212 21:23:12.019066   14160 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:23:12.019066   14160 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:23:12.022067   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:12.046070   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	I1212 21:23:12.073066   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	I1212 21:23:12.092898   14160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:23:12.116354   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:12.165367   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 21:23:12.165367   14160 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 21:23:12.167359   14160 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-124600" to be "Ready" ...
	I1212 21:23:12.169365   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:23:12.186351   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 21:23:12.186351   14160 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 21:23:12.204423   14160 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 21:23:12.204423   14160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 21:23:12.207045   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 21:23:12.207045   14160 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 21:23:12.230521   14160 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 21:23:12.230521   14160 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 21:23:12.231517   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 21:23:12.231517   14160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 21:23:12.233521   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:23:12.305253   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 21:23:12.305253   14160 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1212 21:23:12.305253   14160 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:23:12.305253   14160 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W1212 21:23:12.386842   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.386922   14160 retry.go:31] will retry after 278.156141ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.390854   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 21:23:12.390854   14160 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 21:23:12.400109   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:23:12.415390   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 21:23:12.415480   14160 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 21:23:12.491717   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 21:23:12.491717   14160 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W1212 21:23:12.492530   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.492530   14160 retry.go:31] will retry after 256.197463ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.512893   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 21:23:12.512893   14160 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 21:23:12.538803   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:23:12.551683   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.551683   14160 retry.go:31] will retry after 265.384209ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:23:12.644080   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.644080   14160 retry.go:31] will retry after 354.535598ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.669419   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:23:12.752922   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.752922   14160 retry.go:31] will retry after 290.803282ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.753921   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:23:12.823384   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1212 21:23:12.917382   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.917460   14160 retry.go:31] will retry after 300.691587ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:13.004960   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 21:23:13.048937   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:23:13.093941   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:13.094016   14160 retry.go:31] will retry after 506.158576ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:13.223508   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:23:13.387160   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:23:13.387160   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:13.387360   14160 retry.go:31] will retry after 272.283438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:13.387397   14160 retry.go:31] will retry after 368.00551ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:13.607806   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:23:13.665164   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:23:13.697618   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:13.698562   14160 retry.go:31] will retry after 669.122462ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:13.760538   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:23:14.372987   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:23:17.195846   14160 node_ready.go:49] node "default-k8s-diff-port-124600" is "Ready"
	I1212 21:23:17.195955   14160 node_ready.go:38] duration metric: took 5.028515s for node "default-k8s-diff-port-124600" to be "Ready" ...
	I1212 21:23:17.195955   14160 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:23:17.200813   14160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:23:20.596132   14160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.9882139s)
	I1212 21:23:20.596672   14160 addons.go:495] Verifying addon metrics-server=true in "default-k8s-diff-port-124600"
	I1212 21:23:21.007551   14160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.3422701s)
	I1212 21:23:21.007551   14160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.2468976s)
	I1212 21:23:21.007551   14160 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.8066778s)
	I1212 21:23:21.007551   14160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (6.634458s)
	I1212 21:23:21.007551   14160 api_server.go:72] duration metric: took 9.150442s to wait for apiserver process to appear ...
	I1212 21:23:21.007551   14160 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:23:21.007551   14160 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62681/healthz ...
	I1212 21:23:21.010167   14160 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-124600 addons enable metrics-server
	
	I1212 21:23:21.099582   14160 api_server.go:279] https://127.0.0.1:62681/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:23:21.100442   14160 api_server.go:103] status: https://127.0.0.1:62681/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:23:21.196982   14160 out.go:179] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I1212 21:23:21.200574   14160 addons.go:530] duration metric: took 9.3434618s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I1212 21:23:21.508465   14160 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62681/healthz ...
	I1212 21:23:21.591838   14160 api_server.go:279] https://127.0.0.1:62681/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:23:21.591838   14160 api_server.go:103] status: https://127.0.0.1:62681/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:23:22.008494   14160 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62681/healthz ...
	I1212 21:23:22.019209   14160 api_server.go:279] https://127.0.0.1:62681/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:23:22.019209   14160 api_server.go:103] status: https://127.0.0.1:62681/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:23:22.507999   14160 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62681/healthz ...
	I1212 21:23:22.600220   14160 api_server.go:279] https://127.0.0.1:62681/healthz returned 200:
	ok
	I1212 21:23:22.604091   14160 api_server.go:141] control plane version: v1.34.2
	I1212 21:23:22.604864   14160 api_server.go:131] duration metric: took 1.5972868s to wait for apiserver health ...
	I1212 21:23:22.604864   14160 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:23:22.612203   14160 system_pods.go:59] 8 kube-system pods found
	I1212 21:23:22.612251   14160 system_pods.go:61] "coredns-66bc5c9577-r7gwt" [979e9392-cb57-473f-a182-4c5303f99ccb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:23:22.612251   14160 system_pods.go:61] "etcd-default-k8s-diff-port-124600" [e50ec1f7-1f83-4869-a84c-952b0e33f049] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:23:22.612251   14160 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-124600" [059ec5f7-88a0-4be1-97fb-6d5c79ea4d2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:23:22.612251   14160 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-124600" [dbf61887-d9e4-47f1-9d4e-ca99ead26629] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:23:22.612251   14160 system_pods.go:61] "kube-proxy-2pvfg" [2f8eb4c0-64c0-4637-aec2-844cef61dbe8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:23:22.612251   14160 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-124600" [f9ed9e8e-2047-4e30-9bf9-ab3bbf3a89e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:23:22.612251   14160 system_pods.go:61] "metrics-server-746fcd58dc-tqqxz" [592c1b3b-eb79-46ea-a4fd-f44074836d45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:23:22.612251   14160 system_pods.go:61] "storage-provisioner" [78048666-1799-4536-8b49-91d324f75325] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:23:22.612251   14160 system_pods.go:74] duration metric: took 7.3871ms to wait for pod list to return data ...
	I1212 21:23:22.612251   14160 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:23:22.616756   14160 default_sa.go:45] found service account: "default"
	I1212 21:23:22.616756   14160 default_sa.go:55] duration metric: took 4.5056ms for default service account to be created ...
	I1212 21:23:22.616756   14160 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:23:22.695042   14160 system_pods.go:86] 8 kube-system pods found
	I1212 21:23:22.695042   14160 system_pods.go:89] "coredns-66bc5c9577-r7gwt" [979e9392-cb57-473f-a182-4c5303f99ccb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:23:22.695105   14160 system_pods.go:89] "etcd-default-k8s-diff-port-124600" [e50ec1f7-1f83-4869-a84c-952b0e33f049] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:23:22.695105   14160 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-124600" [059ec5f7-88a0-4be1-97fb-6d5c79ea4d2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:23:22.695105   14160 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-124600" [dbf61887-d9e4-47f1-9d4e-ca99ead26629] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:23:22.695105   14160 system_pods.go:89] "kube-proxy-2pvfg" [2f8eb4c0-64c0-4637-aec2-844cef61dbe8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:23:22.695105   14160 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-124600" [f9ed9e8e-2047-4e30-9bf9-ab3bbf3a89e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:23:22.695105   14160 system_pods.go:89] "metrics-server-746fcd58dc-tqqxz" [592c1b3b-eb79-46ea-a4fd-f44074836d45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:23:22.695168   14160 system_pods.go:89] "storage-provisioner" [78048666-1799-4536-8b49-91d324f75325] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:23:22.695168   14160 system_pods.go:126] duration metric: took 78.4107ms to wait for k8s-apps to be running ...
	I1212 21:23:22.695198   14160 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:23:22.700468   14160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:23:22.725193   14160 system_svc.go:56] duration metric: took 29.0191ms WaitForService to wait for kubelet
	I1212 21:23:22.725193   14160 kubeadm.go:587] duration metric: took 10.868056s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:23:22.725193   14160 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:23:22.732161   14160 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1212 21:23:22.732201   14160 node_conditions.go:123] node cpu capacity is 16
	I1212 21:23:22.732201   14160 node_conditions.go:105] duration metric: took 7.0085ms to run NodePressure ...
	I1212 21:23:22.732201   14160 start.go:242] waiting for startup goroutines ...
	I1212 21:23:22.732201   14160 start.go:247] waiting for cluster config update ...
	I1212 21:23:22.732201   14160 start.go:256] writing updated cluster config ...
	I1212 21:23:22.737899   14160 ssh_runner.go:195] Run: rm -f paused
	I1212 21:23:22.745044   14160 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 21:23:22.751178   14160 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-r7gwt" in "kube-system" namespace to be "Ready" or be gone ...
	W1212 21:23:24.761658   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	W1212 21:23:26.763298   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	W1212 21:23:29.260393   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	W1212 21:23:31.262454   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	W1212 21:23:33.762195   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	W1212 21:23:35.762487   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	W1212 21:23:39.113069   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	W1212 21:23:41.263268   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	W1212 21:23:43.269341   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	I1212 21:23:44.762609   14160 pod_ready.go:94] pod "coredns-66bc5c9577-r7gwt" is "Ready"
	I1212 21:23:44.762609   14160 pod_ready.go:86] duration metric: took 22.0110788s for pod "coredns-66bc5c9577-r7gwt" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:44.767351   14160 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-124600" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:44.774353   14160 pod_ready.go:94] pod "etcd-default-k8s-diff-port-124600" is "Ready"
	I1212 21:23:44.774353   14160 pod_ready.go:86] duration metric: took 7.0013ms for pod "etcd-default-k8s-diff-port-124600" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:44.779541   14160 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-124600" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:44.786861   14160 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-124600" is "Ready"
	I1212 21:23:44.786861   14160 pod_ready.go:86] duration metric: took 7.3192ms for pod "kube-apiserver-default-k8s-diff-port-124600" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:44.790455   14160 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-124600" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:44.958511   14160 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-124600" is "Ready"
	I1212 21:23:44.958599   14160 pod_ready.go:86] duration metric: took 168.1411ms for pod "kube-controller-manager-default-k8s-diff-port-124600" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:45.158399   14160 pod_ready.go:83] waiting for pod "kube-proxy-2pvfg" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:45.557624   14160 pod_ready.go:94] pod "kube-proxy-2pvfg" is "Ready"
	I1212 21:23:45.557624   14160 pod_ready.go:86] duration metric: took 399.2187ms for pod "kube-proxy-2pvfg" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:45.758026   14160 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-124600" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:46.157650   14160 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-124600" is "Ready"
	I1212 21:23:46.158249   14160 pod_ready.go:86] duration metric: took 400.1515ms for pod "kube-scheduler-default-k8s-diff-port-124600" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:46.158249   14160 pod_ready.go:40] duration metric: took 23.4127353s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 21:23:46.259466   14160 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 21:23:46.263937   14160 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-124600" cluster and "default" namespace by default
	I1212 21:23:57.490599   11500 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 21:23:57.490599   11500 kubeadm.go:319] 
	I1212 21:23:57.490599   11500 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 21:23:57.495885   11500 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 21:23:57.496001   11500 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 21:23:57.497139   11500 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 21:23:57.497139   11500 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 21:23:57.497139   11500 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 21:23:57.497139   11500 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 21:23:57.497669   11500 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 21:23:57.497746   11500 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 21:23:57.497746   11500 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 21:23:57.497746   11500 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 21:23:57.497746   11500 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 21:23:57.497746   11500 kubeadm.go:319] CONFIG_INET: enabled
	I1212 21:23:57.498271   11500 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 21:23:57.498345   11500 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 21:23:57.498345   11500 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 21:23:57.498345   11500 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 21:23:57.498345   11500 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 21:23:57.498900   11500 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 21:23:57.498900   11500 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 21:23:57.498900   11500 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 21:23:57.498900   11500 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 21:23:57.498900   11500 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 21:23:57.498900   11500 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 21:23:57.499450   11500 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 21:23:57.499613   11500 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 21:23:57.499682   11500 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 21:23:57.499716   11500 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 21:23:57.499716   11500 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 21:23:57.499716   11500 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 21:23:57.499716   11500 kubeadm.go:319] OS: Linux
	I1212 21:23:57.499716   11500 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 21:23:57.500238   11500 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 21:23:57.500273   11500 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 21:23:57.500273   11500 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 21:23:57.500273   11500 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 21:23:57.500273   11500 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 21:23:57.500273   11500 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 21:23:57.500273   11500 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 21:23:57.500863   11500 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 21:23:57.501070   11500 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:23:57.501182   11500 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:23:57.501182   11500 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 21:23:57.501182   11500 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:23:57.504498   11500 out.go:252]   - Generating certificates and keys ...
	I1212 21:23:57.505131   11500 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 21:23:57.505131   11500 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 21:23:57.505131   11500 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 21:23:57.505131   11500 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 21:23:57.505777   11500 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 21:23:57.505777   11500 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 21:23:57.505777   11500 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 21:23:57.506311   11500 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-285600] and IPs [192.168.121.2 127.0.0.1 ::1]
	I1212 21:23:57.506429   11500 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 21:23:57.506429   11500 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-285600] and IPs [192.168.121.2 127.0.0.1 ::1]
	I1212 21:23:57.506429   11500 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 21:23:57.506429   11500 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 21:23:57.506990   11500 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 21:23:57.506990   11500 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:23:57.506990   11500 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:23:57.506990   11500 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 21:23:57.506990   11500 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:23:57.506990   11500 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:23:57.506990   11500 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:23:57.506990   11500 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:23:57.506990   11500 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:23:57.510650   11500 out.go:252]   - Booting up control plane ...
	I1212 21:23:57.510650   11500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:23:57.510650   11500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:23:57.510650   11500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:23:57.511664   11500 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:23:57.511664   11500 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 21:23:57.511664   11500 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 21:23:57.511664   11500 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:23:57.511664   11500 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 21:23:57.511664   11500 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 21:23:57.512655   11500 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 21:23:57.512655   11500 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000951132s
	I1212 21:23:57.512655   11500 kubeadm.go:319] 
	I1212 21:23:57.512655   11500 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 21:23:57.512655   11500 kubeadm.go:319] 	- The kubelet is not running
	I1212 21:23:57.512655   11500 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 21:23:57.512655   11500 kubeadm.go:319] 
	I1212 21:23:57.512655   11500 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 21:23:57.512655   11500 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 21:23:57.512655   11500 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 21:23:57.512655   11500 kubeadm.go:319] 
	W1212 21:23:57.513649   11500 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-285600] and IPs [192.168.121.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-285600] and IPs [192.168.121.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000951132s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 21:23:57.516687   11500 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1212 21:23:57.973632   11500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:23:58.000358   11500 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 21:23:58.005518   11500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:23:58.022197   11500 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:23:58.022197   11500 kubeadm.go:158] found existing configuration files:
	
	I1212 21:23:58.026872   11500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 21:23:58.039115   11500 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 21:23:58.043123   11500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 21:23:58.060114   11500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 21:23:58.073122   11500 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 21:23:58.076119   11500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 21:23:58.092125   11500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 21:23:58.107123   11500 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 21:23:58.112123   11500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 21:23:58.132133   11500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 21:23:58.145128   11500 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 21:23:58.149118   11500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 21:23:58.165115   11500 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 21:23:58.280707   11500 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1212 21:23:58.378404   11500 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 21:23:58.484549   11500 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 21:26:50.572138    3280 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 21:26:50.572138    3280 kubeadm.go:319] 
	I1212 21:26:50.572138    3280 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 21:26:50.576372    3280 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 21:26:50.576562    3280 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 21:26:50.576743    3280 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 21:26:50.576743    3280 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 21:26:50.576743    3280 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 21:26:50.576743    3280 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 21:26:50.576743    3280 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 21:26:50.577278    3280 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 21:26:50.577527    3280 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 21:26:50.577527    3280 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 21:26:50.577527    3280 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 21:26:50.577527    3280 kubeadm.go:319] CONFIG_INET: enabled
	I1212 21:26:50.577527    3280 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 21:26:50.577527    3280 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 21:26:50.578180    3280 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 21:26:50.578223    3280 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 21:26:50.578223    3280 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 21:26:50.578223    3280 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 21:26:50.578223    3280 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 21:26:50.578753    3280 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 21:26:50.578857    3280 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 21:26:50.579009    3280 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 21:26:50.579109    3280 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 21:26:50.579235    3280 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 21:26:50.579500    3280 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 21:26:50.579604    3280 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 21:26:50.579832    3280 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 21:26:50.579931    3280 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 21:26:50.580029    3280 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 21:26:50.580029    3280 kubeadm.go:319] OS: Linux
	I1212 21:26:50.580029    3280 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 21:26:50.580029    3280 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 21:26:50.580029    3280 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 21:26:50.580029    3280 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 21:26:50.580562    3280 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 21:26:50.580709    3280 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 21:26:50.580788    3280 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 21:26:50.580931    3280 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 21:26:50.580972    3280 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 21:26:50.580972    3280 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:26:50.580972    3280 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:26:50.581495    3280 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 21:26:50.581626    3280 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:26:50.585055    3280 out.go:252]   - Generating certificates and keys ...
	I1212 21:26:50.585055    3280 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 21:26:50.585055    3280 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 21:26:50.585694    3280 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 21:26:50.585694    3280 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 21:26:50.585694    3280 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 21:26:50.585694    3280 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 21:26:50.585694    3280 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 21:26:50.586227    3280 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-449900] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1212 21:26:50.586357    3280 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 21:26:50.586417    3280 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-449900] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1212 21:26:50.586417    3280 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 21:26:50.586417    3280 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 21:26:50.586417    3280 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 21:26:50.587005    3280 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:26:50.587068    3280 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:26:50.587068    3280 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 21:26:50.587068    3280 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:26:50.587068    3280 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:26:50.587068    3280 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:26:50.587734    3280 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:26:50.587927    3280 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:26:50.590646    3280 out.go:252]   - Booting up control plane ...
	I1212 21:26:50.591259    3280 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:26:50.591837    3280 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:26:50.591837    3280 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:26:50.591837    3280 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:26:50.591837    3280 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 21:26:50.592415    3280 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 21:26:50.592415    3280 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:26:50.592415    3280 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 21:26:50.592415    3280 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 21:26:50.592415    3280 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 21:26:50.592415    3280 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001153116s
	I1212 21:26:50.592415    3280 kubeadm.go:319] 
	I1212 21:26:50.593382    3280 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 21:26:50.593382    3280 kubeadm.go:319] 	- The kubelet is not running
	I1212 21:26:50.593382    3280 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 21:26:50.593382    3280 kubeadm.go:319] 
	I1212 21:26:50.593382    3280 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 21:26:50.593382    3280 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 21:26:50.593382    3280 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 21:26:50.593382    3280 kubeadm.go:319] 
	W1212 21:26:50.593382    3280 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-449900] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-449900] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001153116s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 21:26:50.597384    3280 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1212 21:26:51.058393    3280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:26:51.077528    3280 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 21:26:51.081780    3280 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:26:51.095285    3280 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:26:51.095342    3280 kubeadm.go:158] found existing configuration files:
	
	I1212 21:26:51.100877    3280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 21:26:51.114399    3280 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 21:26:51.119274    3280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 21:26:51.137891    3280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 21:26:51.152853    3280 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 21:26:51.157180    3280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 21:26:51.176783    3280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 21:26:51.190524    3280 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 21:26:51.194597    3280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 21:26:51.212488    3280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 21:26:51.228065    3280 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 21:26:51.232039    3280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 21:26:51.250057    3280 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 21:26:51.372297    3280 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1212 21:26:51.461499    3280 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 21:26:51.553708    3280 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 21:27:59.635671   11500 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 21:27:59.635671   11500 kubeadm.go:319] 
	I1212 21:27:59.636285   11500 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 21:27:59.640685   11500 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 21:27:59.640685   11500 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 21:27:59.641210   11500 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 21:27:59.641454   11500 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 21:27:59.641454   11500 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 21:27:59.641454   11500 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 21:27:59.641454   11500 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 21:27:59.641454   11500 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 21:27:59.641454   11500 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 21:27:59.642159   11500 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 21:27:59.642187   11500 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 21:27:59.642187   11500 kubeadm.go:319] CONFIG_INET: enabled
	I1212 21:27:59.642187   11500 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 21:27:59.642187   11500 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 21:27:59.642718   11500 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 21:27:59.642918   11500 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 21:27:59.643104   11500 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 21:27:59.643295   11500 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 21:27:59.643295   11500 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 21:27:59.643295   11500 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 21:27:59.643295   11500 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 21:27:59.643295   11500 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 21:27:59.643935   11500 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 21:27:59.643987   11500 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 21:27:59.643987   11500 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 21:27:59.643987   11500 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 21:27:59.643987   11500 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 21:27:59.643987   11500 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 21:27:59.644635   11500 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 21:27:59.644733   11500 kubeadm.go:319] OS: Linux
	I1212 21:27:59.644880   11500 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 21:27:59.645003   11500 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 21:27:59.645114   11500 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 21:27:59.645225   11500 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 21:27:59.645248   11500 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 21:27:59.645248   11500 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 21:27:59.645248   11500 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 21:27:59.645248   11500 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 21:27:59.645248   11500 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 21:27:59.645248   11500 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:27:59.645998   11500 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:27:59.646240   11500 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 21:27:59.646401   11500 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:27:59.649353   11500 out.go:252]   - Generating certificates and keys ...
	I1212 21:27:59.649353   11500 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 21:27:59.649353   11500 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 21:27:59.649353   11500 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 21:27:59.649996   11500 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 21:27:59.649996   11500 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 21:27:59.649996   11500 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 21:27:59.649996   11500 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 21:27:59.649996   11500 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 21:27:59.650580   11500 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 21:27:59.650580   11500 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 21:27:59.650580   11500 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 21:27:59.650580   11500 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:27:59.650580   11500 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:27:59.650580   11500 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 21:27:59.651191   11500 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:27:59.651254   11500 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:27:59.651254   11500 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:27:59.651254   11500 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:27:59.651254   11500 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:27:59.653668   11500 out.go:252]   - Booting up control plane ...
	I1212 21:27:59.653940   11500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 21:27:59.655077   11500 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 21:27:59.655321   11500 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 21:27:59.655492   11500 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00060482s
	I1212 21:27:59.655492   11500 kubeadm.go:319] 
	I1212 21:27:59.655630   11500 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 21:27:59.655630   11500 kubeadm.go:319] 	- The kubelet is not running
	I1212 21:27:59.655821   11500 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 21:27:59.655821   11500 kubeadm.go:319] 
	I1212 21:27:59.656041   11500 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 21:27:59.656041   11500 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 21:27:59.656041   11500 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 21:27:59.656041   11500 kubeadm.go:319] 
	I1212 21:27:59.656041   11500 kubeadm.go:403] duration metric: took 8m4.8179078s to StartCluster
	I1212 21:27:59.656041   11500 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:27:59.659651   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:27:59.720934   11500 cri.go:89] found id: ""
	I1212 21:27:59.720934   11500 logs.go:282] 0 containers: []
	W1212 21:27:59.720934   11500 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:27:59.720934   11500 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:27:59.725183   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:27:59.766585   11500 cri.go:89] found id: ""
	I1212 21:27:59.766585   11500 logs.go:282] 0 containers: []
	W1212 21:27:59.766585   11500 logs.go:284] No container was found matching "etcd"
	I1212 21:27:59.766585   11500 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:27:59.771623   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:27:59.811981   11500 cri.go:89] found id: ""
	I1212 21:27:59.811981   11500 logs.go:282] 0 containers: []
	W1212 21:27:59.811981   11500 logs.go:284] No container was found matching "coredns"
	I1212 21:27:59.811981   11500 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:27:59.817402   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:27:59.863867   11500 cri.go:89] found id: ""
	I1212 21:27:59.863867   11500 logs.go:282] 0 containers: []
	W1212 21:27:59.863867   11500 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:27:59.863867   11500 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:27:59.874092   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:27:59.916790   11500 cri.go:89] found id: ""
	I1212 21:27:59.916790   11500 logs.go:282] 0 containers: []
	W1212 21:27:59.916790   11500 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:27:59.916790   11500 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:27:59.921036   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:27:59.972193   11500 cri.go:89] found id: ""
	I1212 21:27:59.972193   11500 logs.go:282] 0 containers: []
	W1212 21:27:59.972193   11500 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:27:59.972193   11500 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:27:59.976673   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:28:00.020419   11500 cri.go:89] found id: ""
	I1212 21:28:00.020419   11500 logs.go:282] 0 containers: []
	W1212 21:28:00.020419   11500 logs.go:284] No container was found matching "kindnet"
	I1212 21:28:00.020419   11500 logs.go:123] Gathering logs for container status ...
	I1212 21:28:00.020419   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:28:00.075393   11500 logs.go:123] Gathering logs for kubelet ...
	I1212 21:28:00.075393   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:28:00.136556   11500 logs.go:123] Gathering logs for dmesg ...
	I1212 21:28:00.136556   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:28:00.180601   11500 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:28:00.180601   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:28:00.264769   11500 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:28:00.257747   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.258889   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.260305   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.261725   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.262897   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:28:00.257747   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.258889   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.260305   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.261725   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.262897   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:28:00.264769   11500 logs.go:123] Gathering logs for Docker ...
	I1212 21:28:00.264769   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1212 21:28:00.295184   11500 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00060482s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 21:28:00.295286   11500 out.go:285] * 
	W1212 21:28:00.295361   11500 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00060482s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 21:28:00.295361   11500 out.go:285] * 
	W1212 21:28:00.297172   11500 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 21:28:00.306876   11500 out.go:203] 
	W1212 21:28:00.310659   11500 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00060482s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 21:28:00.310880   11500 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 21:28:00.310880   11500 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 21:28:00.312599   11500 out.go:203] 
	
	
	==> Docker <==
	Dec 12 21:19:27 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:27.896422880Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 12 21:19:27 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:27.896514789Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 12 21:19:27 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:27.896525790Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 12 21:19:27 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:27.896530891Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 21:19:27 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:27.896538492Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 12 21:19:27 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:27.896562994Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 12 21:19:27 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:27.896607799Z" level=info msg="Initializing buildkit"
	Dec 12 21:19:28 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:28.063364015Z" level=info msg="Completed buildkit initialization"
	Dec 12 21:19:28 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:28.070100507Z" level=info msg="Daemon has completed initialization"
	Dec 12 21:19:28 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:28.070204618Z" level=info msg="API listen on /run/docker.sock"
	Dec 12 21:19:28 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:28.070271524Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 21:19:28 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:28.070381736Z" level=info msg="API listen on [::]:2376"
	Dec 12 21:19:28 no-preload-285600 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 12 21:19:28 no-preload-285600 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Start docker client with request timeout 0s"
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Loaded network plugin cni"
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 12 21:19:28 no-preload-285600 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:28:02.435695   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:02.436710   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:02.439173   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:02.440569   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:02.441500   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[Dec12 21:23] CPU: 13 PID: 434005 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f45063b9b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f45063b9af6.
	[  +0.000001] RSP: 002b:00007fffb2f7a7b0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.884221] CPU: 10 PID: 434152 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f1ab5b6bb20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7f1ab5b6baf6.
	[  +0.000001] RSP: 002b:00007fffe51bbd80 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +3.005046] tmpfs: Unknown parameter 'noswap'
	[Dec12 21:24] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 21:28:02 up  2:29,  0 user,  load average: 0.64, 2.64, 3.69
	Linux no-preload-285600 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 21:27:59 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:27:59 no-preload-285600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 12 21:27:59 no-preload-285600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:27:59 no-preload-285600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:27:59 no-preload-285600 kubelet[10769]: E1212 21:27:59.888009   10769 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:27:59 no-preload-285600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:27:59 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:28:00 no-preload-285600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 12 21:28:00 no-preload-285600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:28:00 no-preload-285600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:28:00 no-preload-285600 kubelet[10857]: E1212 21:28:00.671682   10857 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:28:00 no-preload-285600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:28:00 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:28:01 no-preload-285600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 12 21:28:01 no-preload-285600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:28:01 no-preload-285600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:28:01 no-preload-285600 kubelet[10881]: E1212 21:28:01.396781   10881 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:28:01 no-preload-285600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:28:01 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:28:02 no-preload-285600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 12 21:28:02 no-preload-285600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:28:02 no-preload-285600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:28:02 no-preload-285600 kubelet[10915]: E1212 21:28:02.131313   10915 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:28:02 no-preload-285600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:28:02 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-285600 -n no-preload-285600
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-285600 -n no-preload-285600: exit status 6 (592.5945ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 21:28:03.542088    4496 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-285600" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-285600" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (531.89s)
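The kubelet restart loop above fails the same validation check on every attempt ("kubelet is configured to not run on a host using cgroup v1"), which is why kubeadm's 4m0s health wait at `http://127.0.0.1:10248/healthz` never succeeds. As a minimal diagnostic sketch (the file path and sample journal line below are hypothetical, not taken from this run), the failure pattern can be spotted in a captured kubelet journal like so:

```shell
# Hypothetical sample: one captured kubelet journal line (path and content are illustrative).
cat <<'EOF' > /tmp/kubelet-journal.log
E1212 21:28:02.131313 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1."
EOF

# Flag the cgroup v1 validation failure if it appears in the capture.
if grep -q 'not run on a host using cgroup v1' /tmp/kubelet-journal.log; then
  echo 'cgroup-v1 kubelet validation failure detected'
fi
```

On a live node the same pattern could be checked against `journalctl -u kubelet` output. Whether to relax the check via the kubelet configuration option `FailCgroupV1: false` (as the kubeadm SystemVerification warning above suggests) or migrate the host to cgroup v2 depends on the environment; on WSL2 hosts like this one, the cgroup version is controlled by the WSL kernel command line rather than by minikube.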

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (518.94s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-449900 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0
E1212 21:22:31.918357   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p newest-cni-449900 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m35.9814474s)

                                                
                                                
-- stdout --
	* [newest-cni-449900] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "newest-cni-449900" primary control-plane node in "newest-cni-449900" cluster
	* Pulling base image v0.0.48-1765505794-22112 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 21:22:16.941108    3280 out.go:360] Setting OutFile to fd 1692 ...
	I1212 21:22:16.990064    3280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:22:16.990064    3280 out.go:374] Setting ErrFile to fd 1704...
	I1212 21:22:16.990064    3280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:22:17.007209    3280 out.go:368] Setting JSON to false
	I1212 21:22:17.010257    3280 start.go:133] hostinfo: {"hostname":"minikube4","uptime":8674,"bootTime":1765565862,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 21:22:17.010257    3280 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 21:22:17.078805    3280 out.go:179] * [newest-cni-449900] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 21:22:17.085462    3280 notify.go:221] Checking for updates...
	I1212 21:22:17.086063    3280 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:22:17.088150    3280 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:22:17.089476    3280 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 21:22:17.100801    3280 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 21:22:17.105769    3280 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:22:17.110918    3280 config.go:182] Loaded profile config "default-k8s-diff-port-124600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1212 21:22:17.110918    3280 config.go:182] Loaded profile config "embed-certs-729900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1212 21:22:17.111560    3280 config.go:182] Loaded profile config "no-preload-285600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:22:17.111560    3280 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 21:22:17.231752    3280 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 21:22:17.235308    3280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:22:17.499655    3280 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-12 21:22:17.481436178 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:22:17.502662    3280 out.go:179] * Using the docker driver based on user configuration
	I1212 21:22:17.506651    3280 start.go:309] selected driver: docker
	I1212 21:22:17.506651    3280 start.go:927] validating driver "docker" against <nil>
	I1212 21:22:17.506651    3280 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:22:17.550282    3280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:22:17.814894    3280 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-12 21:22:17.797375606 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:22:17.814894    3280 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1212 21:22:17.814894    3280 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1212 21:22:17.815884    3280 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 21:22:17.819912    3280 out.go:179] * Using Docker Desktop driver with root privileges
	I1212 21:22:17.822895    3280 cni.go:84] Creating CNI manager for ""
	I1212 21:22:17.822895    3280 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:22:17.822895    3280 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 21:22:17.822895    3280 start.go:353] cluster config:
	{Name:newest-cni-449900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:22:17.826888    3280 out.go:179] * Starting "newest-cni-449900" primary control-plane node in "newest-cni-449900" cluster
	I1212 21:22:17.828883    3280 cache.go:134] Beginning downloading kic base image for docker with docker
	I1212 21:22:17.831883    3280 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:22:17.833901    3280 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 21:22:17.833901    3280 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:22:17.834906    3280 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1212 21:22:17.834906    3280 cache.go:65] Caching tarball of preloaded images
	I1212 21:22:17.834906    3280 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 21:22:17.834906    3280 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1212 21:22:17.834906    3280 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\config.json ...
	I1212 21:22:17.835883    3280 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\config.json: {Name:mkd7f35449b0d65726d9696f4202cd2394f99a13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:22:17.902889    3280 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:22:17.902889    3280 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:22:17.902889    3280 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:22:17.902889    3280 start.go:360] acquireMachinesLock for newest-cni-449900: {Name:mk48eea84400331f4298a4f493f82f3b8d104477 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:22:17.903889    3280 start.go:364] duration metric: took 999.5µs to acquireMachinesLock for "newest-cni-449900"
	I1212 21:22:17.903889    3280 start.go:93] Provisioning new machine with config: &{Name:newest-cni-449900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 21:22:17.903889    3280 start.go:125] createHost starting for "" (driver="docker")
	I1212 21:22:17.908337    3280 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 21:22:17.908407    3280 start.go:159] libmachine.API.Create for "newest-cni-449900" (driver="docker")
	I1212 21:22:17.908407    3280 client.go:173] LocalClient.Create starting
	I1212 21:22:17.909040    3280 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1212 21:22:17.909173    3280 main.go:143] libmachine: Decoding PEM data...
	I1212 21:22:17.909173    3280 main.go:143] libmachine: Parsing certificate...
	I1212 21:22:17.909173    3280 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1212 21:22:17.909173    3280 main.go:143] libmachine: Decoding PEM data...
	I1212 21:22:17.909173    3280 main.go:143] libmachine: Parsing certificate...
	I1212 21:22:17.914768    3280 cli_runner.go:164] Run: docker network inspect newest-cni-449900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 21:22:17.966231    3280 cli_runner.go:211] docker network inspect newest-cni-449900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 21:22:17.970589    3280 network_create.go:284] running [docker network inspect newest-cni-449900] to gather additional debugging logs...
	I1212 21:22:17.970589    3280 cli_runner.go:164] Run: docker network inspect newest-cni-449900
	W1212 21:22:18.023006    3280 cli_runner.go:211] docker network inspect newest-cni-449900 returned with exit code 1
	I1212 21:22:18.023006    3280 network_create.go:287] error running [docker network inspect newest-cni-449900]: docker network inspect newest-cni-449900: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-449900 not found
	I1212 21:22:18.023006    3280 network_create.go:289] output of [docker network inspect newest-cni-449900]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-449900 not found
	
	** /stderr **
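The failed `docker network inspect` above is expected: minikube probes for the profile network and treats exit code 1 with a "not found" daemon error as "network does not exist yet, create it", while any other failure is a real error. A minimal sketch of that interpretation (hypothetical helper, not minikube's actual code):

```python
def network_exists(stderr: str, exit_code: int) -> bool:
    """Interpret `docker network inspect <name>` the way the log does:
    exit 0 means the network exists; exit 1 with a 'not found' daemon
    message means it simply hasn't been created; anything else is a
    genuine daemon/CLI failure."""
    if exit_code == 0:
        return True
    if "not found" in stderr:
        return False
    raise RuntimeError(f"docker network inspect failed: {stderr.strip()}")
```

With the stderr shown above (`Error response from daemon: network newest-cni-449900 not found`) this returns `False`, which is why the run proceeds to `network_create` rather than aborting.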
	I1212 21:22:18.026005    3280 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 21:22:18.104611    3280 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 21:22:18.135669    3280 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 21:22:18.150943    3280 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 21:22:18.181286    3280 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 21:22:18.195358    3280 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00180d0b0}
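The subnet scan above walks candidate private /24 networks, skipping ones already reserved by other profiles, and settles on 192.168.85.0/24; note the third octet advancing by 9 (49, 58, 67, 76, 85). A rough sketch of that selection loop, assuming the fixed 192.168.x.0/24 pattern and step observed in the log (the helper name and `tries` bound are illustrative, not minikube's API):

```python
import ipaddress

def next_free_subnet(reserved, start="192.168.49.0/24", step=9, tries=20):
    """Walk candidate /24 subnets, third octet advancing by `step`
    (matching the 49 -> 58 -> 67 -> 76 -> 85 sequence in the log),
    and return the first network not already reserved."""
    octet = int(start.split(".")[2])
    for _ in range(tries):
        cand = ipaddress.ip_network(f"192.168.{octet}.0/24")
        if cand not in reserved:
            return cand
        octet += step
    raise RuntimeError("no free private subnet found")
```

The gateway is then the first host of the chosen subnet (192.168.85.1) and the node's static IP the second (192.168.85.2), as the subsequent `network_create` and `kic` lines show.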
	I1212 21:22:18.195358    3280 network_create.go:124] attempt to create docker network newest-cni-449900 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1212 21:22:18.198612    3280 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-449900 newest-cni-449900
	I1212 21:22:18.351674    3280 network_create.go:108] docker network newest-cni-449900 192.168.85.0/24 created
	I1212 21:22:18.351674    3280 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-449900" container
	I1212 21:22:18.360733    3280 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 21:22:18.426803    3280 cli_runner.go:164] Run: docker volume create newest-cni-449900 --label name.minikube.sigs.k8s.io=newest-cni-449900 --label created_by.minikube.sigs.k8s.io=true
	I1212 21:22:18.497802    3280 oci.go:103] Successfully created a docker volume newest-cni-449900
	I1212 21:22:18.502833    3280 cli_runner.go:164] Run: docker run --rm --name newest-cni-449900-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-449900 --entrypoint /usr/bin/test -v newest-cni-449900:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib
	I1212 21:22:19.811445    3280 cli_runner.go:217] Completed: docker run --rm --name newest-cni-449900-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-449900 --entrypoint /usr/bin/test -v newest-cni-449900:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib: (1.3085917s)
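The "preload sidecar" run completed above is a throwaway `--rm` container whose only purpose is to mount the named volume at `/var` so Docker materialises it, using `/usr/bin/test -d /var/lib` as a no-op entrypoint. A sketch of assembling that argv, based solely on the command visible in the log (the function name is hypothetical):

```python
def preload_sidecar_cmd(profile: str, image: str) -> list[str]:
    """Build the sidecar invocation seen in the log: an ephemeral
    container that mounts the profile volume at /var and exits after
    `test -d /var/lib`, leaving the volume created and labelled."""
    return [
        "docker", "run", "--rm",
        "--name", f"{profile}-preload-sidecar",
        "--label", "created_by.minikube.sigs.k8s.io=true",
        "--label", f"name.minikube.sigs.k8s.io={profile}",
        "--entrypoint", "/usr/bin/test",
        "-v", f"{profile}:/var",
        image,
        "-d", "/var/lib",
    ]
```

The follow-up `tar` run then extracts the preloaded image tarball into the same volume, which is why no image pulls are needed when the node container starts.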
	I1212 21:22:19.811445    3280 oci.go:107] Successfully prepared a docker volume newest-cni-449900
	I1212 21:22:19.811445    3280 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 21:22:19.811445    3280 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 21:22:19.816448    3280 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-449900:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 21:22:34.575469    3280 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-449900:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir: (14.7587862s)
	I1212 21:22:34.575527    3280 kic.go:203] duration metric: took 14.7638472s to extract preloaded images to volume ...
	I1212 21:22:34.581099    3280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:22:34.863395    3280 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-12 21:22:34.835543492 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:22:34.868391    3280 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 21:22:35.144219    3280 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-449900 --name newest-cni-449900 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-449900 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-449900 --network newest-cni-449900 --ip 192.168.85.2 --volume newest-cni-449900:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138
	I1212 21:22:35.792880    3280 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Running}}
	I1212 21:22:35.865502    3280 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:22:35.925512    3280 cli_runner.go:164] Run: docker exec newest-cni-449900 stat /var/lib/dpkg/alternatives/iptables
	I1212 21:22:36.033965    3280 oci.go:144] the created container "newest-cni-449900" has a running status.
	I1212 21:22:36.034964    3280 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa...
	I1212 21:22:36.229251    3280 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 21:22:36.313730    3280 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:22:36.380714    3280 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 21:22:36.380714    3280 kic_runner.go:114] Args: [docker exec --privileged newest-cni-449900 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 21:22:36.513307    3280 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa...
	I1212 21:22:38.678819    3280 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:22:38.734146    3280 machine.go:94] provisionDockerMachine start ...
	I1212 21:22:38.737531    3280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:22:38.796352    3280 main.go:143] libmachine: Using SSH client type: native
	I1212 21:22:38.810392    3280 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62611 <nil> <nil>}
	I1212 21:22:38.810392    3280 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:22:38.996174    3280 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-449900
	
	I1212 21:22:38.996174    3280 ubuntu.go:182] provisioning hostname "newest-cni-449900"
	I1212 21:22:39.002848    3280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:22:39.057348    3280 main.go:143] libmachine: Using SSH client type: native
	I1212 21:22:39.057348    3280 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62611 <nil> <nil>}
	I1212 21:22:39.058349    3280 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-449900 && echo "newest-cni-449900" | sudo tee /etc/hostname
	I1212 21:22:39.251300    3280 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-449900
	
	I1212 21:22:39.255037    3280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:22:39.314330    3280 main.go:143] libmachine: Using SSH client type: native
	I1212 21:22:39.315070    3280 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62611 <nil> <nil>}
	I1212 21:22:39.315147    3280 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-449900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-449900/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-449900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:22:39.488244    3280 main.go:143] libmachine: SSH cmd err, output: <nil>: 
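The shell snippet run over SSH above is an idempotent /etc/hosts update: if the hostname is already present, do nothing; otherwise rewrite an existing `127.0.1.1` line or append a new one. A pure-string Python sketch of the same logic (hypothetical helper, shown only to make the branching explicit):

```python
import re

def ensure_hostname_in_hosts(hosts: str, name: str) -> str:
    """Mirror the SSH'd shell snippet: leave /etc/hosts alone if <name>
    already appears, otherwise repoint the 127.0.1.1 entry at it, or
    append one if no such entry exists."""
    if re.search(rf"\s{re.escape(name)}$", hosts, re.MULTILINE):
        return hosts
    if re.search(r"^127\.0\.1\.1\s", hosts, re.MULTILINE):
        return re.sub(r"^127\.0\.1\.1\s.*$", f"127.0.1.1 {name}",
                      hosts, flags=re.MULTILINE)
    return hosts + f"127.0.1.1 {name}\n"
```

Running it twice is a no-op the second time, matching the empty SSH output logged above when the entry was already in place.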
	I1212 21:22:39.488244    3280 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1212 21:22:39.488244    3280 ubuntu.go:190] setting up certificates
	I1212 21:22:39.488244    3280 provision.go:84] configureAuth start
	I1212 21:22:39.493040    3280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-449900
	I1212 21:22:39.553452    3280 provision.go:143] copyHostCerts
	I1212 21:22:39.553452    3280 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1212 21:22:39.553452    3280 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1212 21:22:39.554451    3280 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 21:22:39.554451    3280 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1212 21:22:39.554451    3280 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1212 21:22:39.555477    3280 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 21:22:39.555477    3280 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1212 21:22:39.555477    3280 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1212 21:22:39.556479    3280 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1212 21:22:39.556479    3280 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-449900 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-449900]
	I1212 21:22:39.646453    3280 provision.go:177] copyRemoteCerts
	I1212 21:22:39.650471    3280 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:22:39.653469    3280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:22:39.708690    3280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62611 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:22:39.833789    3280 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 21:22:39.871536    3280 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 21:22:39.897935    3280 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 21:22:39.923939    3280 provision.go:87] duration metric: took 434.6753ms to configureAuth
	I1212 21:22:39.923939    3280 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:22:39.923939    3280 config.go:182] Loaded profile config "newest-cni-449900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:22:39.927934    3280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:22:39.980932    3280 main.go:143] libmachine: Using SSH client type: native
	I1212 21:22:39.980932    3280 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62611 <nil> <nil>}
	I1212 21:22:39.980932    3280 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 21:22:40.144814    3280 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1212 21:22:40.145340    3280 ubuntu.go:71] root file system type: overlay
	I1212 21:22:40.145534    3280 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 21:22:40.149654    3280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:22:40.208479    3280 main.go:143] libmachine: Using SSH client type: native
	I1212 21:22:40.208479    3280 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62611 <nil> <nil>}
	I1212 21:22:40.208479    3280 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 21:22:40.402170    3280 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 21:22:40.405422    3280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:22:40.459717    3280 main.go:143] libmachine: Using SSH client type: native
	I1212 21:22:40.460398    3280 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62611 <nil> <nil>}
	I1212 21:22:40.460433    3280 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 21:22:41.897913    3280 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-12 21:22:40.387993816 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
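The provisioning step above follows an update-if-changed pattern: render the new unit to `docker.service.new`, `diff` it against the installed unit, and only on a difference move it into place and reload/restart. A minimal sketch of that pattern, using throwaway files in a temp directory instead of the real `/lib/systemd/system/docker.service` (the echo stands in for the `systemctl daemon-reload && systemctl restart docker` step, which needs a real host):

```shell
#!/usr/bin/env bash
set -eu

# Stand-in files; contents are illustrative, not the image's real unit.
workdir=$(mktemp -d)
printf '%s\n' 'ExecStart=/usr/bin/dockerd' > "$workdir/docker.service"
printf '%s\n' 'ExecStart=' 'ExecStart=/usr/bin/dockerd -H fd://' > "$workdir/docker.service.new"

# diff exits non-zero when the files differ, so the replace-and-reload
# branch runs only when the rendered unit actually changed.
if ! diff -u "$workdir/docker.service" "$workdir/docker.service.new"; then
  mv "$workdir/docker.service.new" "$workdir/docker.service"
  echo "unit changed: would run systemctl daemon-reload && systemctl restart docker"
fi

# The installed unit now carries the blank ExecStart= that clears the
# inherited command, followed by the real one.
grep -c '^ExecStart=' "$workdir/docker.service"
```

Running the `mv` only on a diff keeps the operation idempotent: a second pass sees identical files and skips the restart entirely.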
	I1212 21:22:41.897995    3280 machine.go:97] duration metric: took 3.1637989s to provisionDockerMachine
	I1212 21:22:41.897995    3280 client.go:176] duration metric: took 23.9892062s to LocalClient.Create
	I1212 21:22:41.897995    3280 start.go:167] duration metric: took 23.9892062s to libmachine.API.Create "newest-cni-449900"
	I1212 21:22:41.897995    3280 start.go:293] postStartSetup for "newest-cni-449900" (driver="docker")
	I1212 21:22:41.897995    3280 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:22:41.902731    3280 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:22:41.905506    3280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:22:41.960720    3280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62611 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:22:42.099047    3280 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:22:42.108768    3280 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:22:42.108768    3280 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:22:42.108768    3280 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1212 21:22:42.109314    3280 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1212 21:22:42.110305    3280 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> 133962.pem in /etc/ssl/certs
	I1212 21:22:42.117077    3280 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:22:42.133563    3280 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /etc/ssl/certs/133962.pem (1708 bytes)
	I1212 21:22:42.163306    3280 start.go:296] duration metric: took 265.2625ms for postStartSetup
	I1212 21:22:42.169903    3280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-449900
	I1212 21:22:42.224021    3280 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\config.json ...
	I1212 21:22:42.230721    3280 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:22:42.233391    3280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:22:42.285679    3280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62611 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:22:42.423890    3280 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:22:42.432154    3280 start.go:128] duration metric: took 24.5278752s to createHost
	I1212 21:22:42.432154    3280 start.go:83] releasing machines lock for "newest-cni-449900", held for 24.5278752s
	I1212 21:22:42.438011    3280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-449900
	I1212 21:22:42.491103    3280 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1212 21:22:42.495902    3280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:22:42.499402    3280 ssh_runner.go:195] Run: cat /version.json
	I1212 21:22:42.503040    3280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:22:42.557373    3280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62611 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:22:42.559384    3280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62611 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:22:42.681639    3280 ssh_runner.go:195] Run: systemctl --version
	W1212 21:22:42.683789    3280 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1212 21:22:42.698728    3280 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:22:42.708096    3280 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:22:42.712558    3280 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:22:42.760610    3280 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 21:22:42.760610    3280 start.go:496] detecting cgroup driver to use...
	I1212 21:22:42.760610    3280 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:22:42.760610    3280 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:22:42.789089    3280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1212 21:22:42.791483    3280 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1212 21:22:42.791483    3280 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1212 21:22:42.808821    3280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 21:22:42.825708    3280 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 21:22:42.832204    3280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 21:22:42.859214    3280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:22:42.879753    3280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 21:22:42.901081    3280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:22:42.920434    3280 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:22:42.940757    3280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 21:22:42.961226    3280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 21:22:42.983198    3280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 21:22:43.006834    3280 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:22:43.024863    3280 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:22:43.046143    3280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:22:43.193674    3280 ssh_runner.go:195] Run: sudo systemctl restart containerd
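The containerd reconfiguration above is a series of anchored `sed -i` edits against `/etc/containerd/config.toml`, each preserving the line's leading indentation via a capture group. A small sketch of two of those edits against a throwaway copy (the TOML snippet here is illustrative, not the image's real config):

```shell
#!/usr/bin/env bash
set -eu

# Throwaway config standing in for /etc/containerd/config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
EOF

# Force the cgroupfs driver: \1 re-emits the captured indentation so the
# TOML nesting is untouched.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"

# Pin the pause image, same capture-group trick.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' "$cfg"

grep -E 'SystemdCgroup|sandbox_image' "$cfg"
```

Anchoring on `^( *)key = ` rather than the bare key name avoids rewriting commented-out or similarly named lines elsewhere in the file.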
	I1212 21:22:43.356952    3280 start.go:496] detecting cgroup driver to use...
	I1212 21:22:43.356952    3280 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:22:43.361665    3280 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 21:22:43.393922    3280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:22:43.418453    3280 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:22:43.489327    3280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:22:43.518142    3280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 21:22:43.541083    3280 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:22:43.569217    3280 ssh_runner.go:195] Run: which cri-dockerd
	I1212 21:22:43.580392    3280 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 21:22:43.592535    3280 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1212 21:22:43.617546    3280 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 21:22:43.763474    3280 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 21:22:43.906466    3280 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 21:22:43.906466    3280 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 21:22:43.930455    3280 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1212 21:22:43.952455    3280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:22:44.100844    3280 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 21:22:45.045609    3280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:22:45.068225    3280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1212 21:22:45.090221    3280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:22:45.113234    3280 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 21:22:45.256418    3280 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 21:22:45.426474    3280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:22:45.601274    3280 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 21:22:45.634139    3280 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1212 21:22:45.656135    3280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:22:45.806798    3280 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1212 21:22:45.912400    3280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:22:45.931928    3280 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 21:22:45.937524    3280 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 21:22:45.944824    3280 start.go:564] Will wait 60s for crictl version
	I1212 21:22:45.948802    3280 ssh_runner.go:195] Run: which crictl
	I1212 21:22:45.958871    3280 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:22:46.019696    3280 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1212 21:22:46.022712    3280 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:22:46.064693    3280 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:22:46.113113    3280 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1212 21:22:46.117509    3280 cli_runner.go:164] Run: docker exec -t newest-cni-449900 dig +short host.docker.internal
	I1212 21:22:46.441407    3280 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1212 21:22:46.446164    3280 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1212 21:22:46.453470    3280 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:22:46.475954    3280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:22:46.541507    3280 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 21:22:46.543514    3280 kubeadm.go:884] updating cluster {Name:newest-cni-449900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 21:22:46.543514    3280 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 21:22:46.546507    3280 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:22:46.579923    3280 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 21:22:46.579960    3280 docker.go:621] Images already preloaded, skipping extraction
	I1212 21:22:46.583488    3280 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:22:46.616033    3280 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 21:22:46.616033    3280 cache_images.go:86] Images are preloaded, skipping loading
	I1212 21:22:46.616033    3280 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 docker true true} ...
	I1212 21:22:46.616033    3280 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-449900 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:22:46.622294    3280 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 21:22:46.697223    3280 cni.go:84] Creating CNI manager for ""
	I1212 21:22:46.697314    3280 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:22:46.697346    3280 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1212 21:22:46.697377    3280 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-449900 NodeName:newest-cni-449900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:22:46.697583    3280 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-449900"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:22:46.702382    3280 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 21:22:46.715240    3280 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:22:46.719690    3280 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:22:46.734245    3280 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1212 21:22:46.752804    3280 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 21:22:46.773638    3280 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1212 21:22:46.799811    3280 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1212 21:22:46.806634    3280 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:22:46.826613    3280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:22:46.975263    3280 ssh_runner.go:195] Run: sudo systemctl start kubelet
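The `/etc/hosts` updates above (for `host.minikube.internal` and `control-plane.minikube.internal`) use a drop-then-append pattern: filter out any stale line for the name, append the fresh entry, and install the result via a temp file. A sketch against a local copy rather than the real `/etc/hosts` (the stale 192.168.85.9 entry is invented for illustration):

```shell
#!/usr/bin/env bash
set -eu

# Local stand-in for /etc/hosts, seeded with a stale entry.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n192.168.85.9\tcontrol-plane.minikube.internal\n' > "$hosts"

# Drop any existing tab-separated entry for the name, append the current
# one, then copy the temp file over the original in a single step.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  printf '192.168.85.2\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"

cat "$hosts"
```

Because the old entry is filtered out before appending, rerunning the block leaves exactly one entry for the name, which is why minikube can apply it unconditionally on every start.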
	I1212 21:22:47.001570    3280 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900 for IP: 192.168.85.2
	I1212 21:22:47.001622    3280 certs.go:195] generating shared ca certs ...
	I1212 21:22:47.001666    3280 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:22:47.002167    3280 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1212 21:22:47.002588    3280 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1212 21:22:47.002707    3280 certs.go:257] generating profile certs ...
	I1212 21:22:47.002789    3280 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\client.key
	I1212 21:22:47.002789    3280 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\client.crt with IP's: []
	I1212 21:22:47.186831    3280 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\client.crt ...
	I1212 21:22:47.186831    3280 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\client.crt: {Name:mk6154553b2a4a4af4bf46220d6cdec413dfff1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:22:47.187820    3280 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\client.key ...
	I1212 21:22:47.187820    3280 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\client.key: {Name:mke35752cc265d54c54dbc7253583e0f1b61cf15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:22:47.188830    3280 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.key.67e5e88d
	I1212 21:22:47.188830    3280 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.crt.67e5e88d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1212 21:22:47.223372    3280 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.crt.67e5e88d ...
	I1212 21:22:47.223372    3280 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.crt.67e5e88d: {Name:mk9e07d628c5762e862fc27c9cfaf32aeafed090 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:22:47.224674    3280 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.key.67e5e88d ...
	I1212 21:22:47.224674    3280 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.key.67e5e88d: {Name:mk77ecc931f6bb54544fb20c7dee9076a1f22195 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:22:47.225420    3280 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.crt.67e5e88d -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.crt
	I1212 21:22:47.242938    3280 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.key.67e5e88d -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.key
	I1212 21:22:47.243593    3280 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\proxy-client.key
	I1212 21:22:47.244208    3280 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\proxy-client.crt with IP's: []
	I1212 21:22:47.365548    3280 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\proxy-client.crt ...
	I1212 21:22:47.365548    3280 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\proxy-client.crt: {Name:mk7240113b3ab720d09b9b68ea2dabc6a5676a39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:22:47.366970    3280 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\proxy-client.key ...
	I1212 21:22:47.366970    3280 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\proxy-client.key: {Name:mk69b2636aeee6b8998bc61f462dba90557a117f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:22:47.388099    3280 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem (1338 bytes)
	W1212 21:22:47.388812    3280 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396_empty.pem, impossibly tiny 0 bytes
	I1212 21:22:47.388812    3280 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1212 21:22:47.388812    3280 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 21:22:47.389445    3280 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 21:22:47.389445    3280 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1212 21:22:47.390006    3280 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem (1708 bytes)
	I1212 21:22:47.391346    3280 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:22:47.422919    3280 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 21:22:47.457917    3280 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:22:47.489086    3280 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:22:47.518084    3280 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 21:22:47.546090    3280 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 21:22:47.576405    3280 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:22:47.607381    3280 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:22:47.641069    3280 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:22:47.674137    3280 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem --> /usr/share/ca-certificates/13396.pem (1338 bytes)
	I1212 21:22:47.701309    3280 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /usr/share/ca-certificates/133962.pem (1708 bytes)
	I1212 21:22:47.727308    3280 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:22:47.753894    3280 ssh_runner.go:195] Run: openssl version
	I1212 21:22:47.769508    3280 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:22:47.786308    3280 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:22:47.802451    3280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:22:47.809556    3280 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:22:47.814998    3280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:22:47.863892    3280 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:22:47.888082    3280 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 21:22:47.910277    3280 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13396.pem
	I1212 21:22:47.932306    3280 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13396.pem /etc/ssl/certs/13396.pem
	I1212 21:22:47.948210    3280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13396.pem
	I1212 21:22:47.955221    3280 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 21:22:47.959215    3280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13396.pem
	I1212 21:22:48.005565    3280 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:22:48.024624    3280 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/13396.pem /etc/ssl/certs/51391683.0
	I1212 21:22:48.046378    3280 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/133962.pem
	I1212 21:22:48.067880    3280 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/133962.pem /etc/ssl/certs/133962.pem
	I1212 21:22:48.088457    3280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133962.pem
	I1212 21:22:48.098336    3280 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 21:22:48.103289    3280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133962.pem
	I1212 21:22:48.153289    3280 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:22:48.172996    3280 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/133962.pem /etc/ssl/certs/3ec20f2e.0
	I1212 21:22:48.189860    3280 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:22:48.200552    3280 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 21:22:48.200822    3280 kubeadm.go:401] StartCluster: {Name:newest-cni-449900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:22:48.204971    3280 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 21:22:48.238582    3280 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:22:48.256152    3280 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:22:48.270020    3280 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 21:22:48.275268    3280 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:22:48.291279    3280 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:22:48.291279    3280 kubeadm.go:158] found existing configuration files:
	
	I1212 21:22:48.295894    3280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 21:22:48.311553    3280 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 21:22:48.315529    3280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 21:22:48.335543    3280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 21:22:48.350240    3280 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 21:22:48.354854    3280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 21:22:48.374619    3280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 21:22:48.389985    3280 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 21:22:48.395326    3280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 21:22:48.418657    3280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 21:22:48.433016    3280 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 21:22:48.439475    3280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 21:22:48.456853    3280 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 21:22:48.568707    3280 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1212 21:22:48.657361    3280 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 21:22:48.782665    3280 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 21:26:50.572138    3280 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 21:26:50.572138    3280 kubeadm.go:319] 
	I1212 21:26:50.572138    3280 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 21:26:50.576372    3280 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 21:26:50.576562    3280 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 21:26:50.576743    3280 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 21:26:50.576743    3280 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 21:26:50.576743    3280 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 21:26:50.576743    3280 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 21:26:50.576743    3280 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 21:26:50.577278    3280 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 21:26:50.577527    3280 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 21:26:50.577527    3280 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 21:26:50.577527    3280 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 21:26:50.577527    3280 kubeadm.go:319] CONFIG_INET: enabled
	I1212 21:26:50.577527    3280 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 21:26:50.577527    3280 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 21:26:50.578180    3280 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 21:26:50.578223    3280 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 21:26:50.578223    3280 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 21:26:50.578223    3280 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 21:26:50.578223    3280 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 21:26:50.578753    3280 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 21:26:50.578857    3280 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 21:26:50.579009    3280 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 21:26:50.579109    3280 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 21:26:50.579235    3280 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 21:26:50.579500    3280 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 21:26:50.579604    3280 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 21:26:50.579832    3280 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 21:26:50.579931    3280 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 21:26:50.580029    3280 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 21:26:50.580029    3280 kubeadm.go:319] OS: Linux
	I1212 21:26:50.580029    3280 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 21:26:50.580029    3280 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 21:26:50.580029    3280 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 21:26:50.580029    3280 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 21:26:50.580562    3280 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 21:26:50.580709    3280 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 21:26:50.580788    3280 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 21:26:50.580931    3280 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 21:26:50.580972    3280 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 21:26:50.580972    3280 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:26:50.580972    3280 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:26:50.581495    3280 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 21:26:50.581626    3280 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:26:50.585055    3280 out.go:252]   - Generating certificates and keys ...
	I1212 21:26:50.585055    3280 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 21:26:50.585055    3280 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 21:26:50.585694    3280 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 21:26:50.585694    3280 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 21:26:50.585694    3280 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 21:26:50.585694    3280 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 21:26:50.585694    3280 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 21:26:50.586227    3280 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-449900] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1212 21:26:50.586357    3280 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 21:26:50.586417    3280 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-449900] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1212 21:26:50.586417    3280 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 21:26:50.586417    3280 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 21:26:50.586417    3280 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 21:26:50.587005    3280 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:26:50.587068    3280 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:26:50.587068    3280 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 21:26:50.587068    3280 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:26:50.587068    3280 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:26:50.587068    3280 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:26:50.587734    3280 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:26:50.587927    3280 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:26:50.590646    3280 out.go:252]   - Booting up control plane ...
	I1212 21:26:50.591259    3280 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:26:50.591837    3280 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:26:50.591837    3280 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:26:50.591837    3280 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:26:50.591837    3280 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 21:26:50.592415    3280 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 21:26:50.592415    3280 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:26:50.592415    3280 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 21:26:50.592415    3280 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 21:26:50.592415    3280 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 21:26:50.592415    3280 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001153116s
	I1212 21:26:50.592415    3280 kubeadm.go:319] 
	I1212 21:26:50.593382    3280 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 21:26:50.593382    3280 kubeadm.go:319] 	- The kubelet is not running
	I1212 21:26:50.593382    3280 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 21:26:50.593382    3280 kubeadm.go:319] 
	I1212 21:26:50.593382    3280 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 21:26:50.593382    3280 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 21:26:50.593382    3280 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 21:26:50.593382    3280 kubeadm.go:319] 
	W1212 21:26:50.593382    3280 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-449900] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-449900] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001153116s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-449900] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-449900] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001153116s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 21:26:50.597384    3280 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1212 21:26:51.058393    3280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:26:51.077528    3280 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 21:26:51.081780    3280 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:26:51.095285    3280 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:26:51.095342    3280 kubeadm.go:158] found existing configuration files:
	
	I1212 21:26:51.100877    3280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 21:26:51.114399    3280 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 21:26:51.119274    3280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 21:26:51.137891    3280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 21:26:51.152853    3280 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 21:26:51.157180    3280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 21:26:51.176783    3280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 21:26:51.190524    3280 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 21:26:51.194597    3280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 21:26:51.212488    3280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 21:26:51.228065    3280 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 21:26:51.232039    3280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 21:26:51.250057    3280 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 21:26:51.372297    3280 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1212 21:26:51.461499    3280 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 21:26:51.553708    3280 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 21:30:52.137302    3280 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 21:30:52.138027    3280 kubeadm.go:319] 
	I1212 21:30:52.138843    3280 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 21:30:52.141943    3280 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 21:30:52.143509    3280 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 21:30:52.143682    3280 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 21:30:52.143737    3280 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 21:30:52.143737    3280 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 21:30:52.143737    3280 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 21:30:52.143737    3280 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 21:30:52.143737    3280 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 21:30:52.144310    3280 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 21:30:52.144310    3280 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 21:30:52.144310    3280 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 21:30:52.144310    3280 kubeadm.go:319] CONFIG_INET: enabled
	I1212 21:30:52.144310    3280 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 21:30:52.144310    3280 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 21:30:52.144995    3280 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 21:30:52.144995    3280 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 21:30:52.144995    3280 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 21:30:52.144995    3280 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 21:30:52.144995    3280 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 21:30:52.145604    3280 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 21:30:52.145604    3280 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 21:30:52.145604    3280 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 21:30:52.145604    3280 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 21:30:52.145604    3280 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 21:30:52.145604    3280 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 21:30:52.146177    3280 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 21:30:52.146242    3280 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 21:30:52.146317    3280 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 21:30:52.146393    3280 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 21:30:52.146451    3280 kubeadm.go:319] OS: Linux
	I1212 21:30:52.146525    3280 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 21:30:52.146600    3280 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 21:30:52.146675    3280 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 21:30:52.146751    3280 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 21:30:52.146798    3280 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 21:30:52.146881    3280 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 21:30:52.146881    3280 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 21:30:52.146881    3280 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 21:30:52.146881    3280 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 21:30:52.146881    3280 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:30:52.146881    3280 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:30:52.147438    3280 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 21:30:52.147438    3280 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:30:52.149720    3280 out.go:252]   - Generating certificates and keys ...
	I1212 21:30:52.150302    3280 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 21:30:52.150302    3280 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 21:30:52.150302    3280 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 21:30:52.150302    3280 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 21:30:52.150831    3280 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 21:30:52.150938    3280 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 21:30:52.150938    3280 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 21:30:52.150938    3280 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 21:30:52.150938    3280 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 21:30:52.150938    3280 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 21:30:52.151461    3280 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 21:30:52.151568    3280 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:30:52.151653    3280 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:30:52.151698    3280 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 21:30:52.151698    3280 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:30:52.151698    3280 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:30:52.151698    3280 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:30:52.151698    3280 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:30:52.152300    3280 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:30:52.154451    3280 out.go:252]   - Booting up control plane ...
	I1212 21:30:52.154764    3280 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:30:52.154956    3280 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:30:52.155143    3280 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:30:52.155412    3280 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:30:52.155651    3280 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 21:30:52.155876    3280 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 21:30:52.156043    3280 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:30:52.156043    3280 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 21:30:52.156043    3280 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 21:30:52.156043    3280 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 21:30:52.156043    3280 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001226136s
	I1212 21:30:52.156043    3280 kubeadm.go:319] 
	I1212 21:30:52.156043    3280 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 21:30:52.156043    3280 kubeadm.go:319] 	- The kubelet is not running
	I1212 21:30:52.156809    3280 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 21:30:52.156973    3280 kubeadm.go:319] 
	I1212 21:30:52.156973    3280 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 21:30:52.156973    3280 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 21:30:52.156973    3280 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 21:30:52.156973    3280 kubeadm.go:319] 
	I1212 21:30:52.156973    3280 kubeadm.go:403] duration metric: took 8m3.9483682s to StartCluster
	I1212 21:30:52.156973    3280 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:30:52.160832    3280 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:30:52.223294    3280 cri.go:89] found id: ""
	I1212 21:30:52.223294    3280 logs.go:282] 0 containers: []
	W1212 21:30:52.223294    3280 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:30:52.223294    3280 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:30:52.227810    3280 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:30:52.274653    3280 cri.go:89] found id: ""
	I1212 21:30:52.274653    3280 logs.go:282] 0 containers: []
	W1212 21:30:52.274653    3280 logs.go:284] No container was found matching "etcd"
	I1212 21:30:52.274653    3280 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:30:52.279047    3280 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:30:52.320887    3280 cri.go:89] found id: ""
	I1212 21:30:52.320887    3280 logs.go:282] 0 containers: []
	W1212 21:30:52.320887    3280 logs.go:284] No container was found matching "coredns"
	I1212 21:30:52.320887    3280 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:30:52.323880    3280 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:30:52.368122    3280 cri.go:89] found id: ""
	I1212 21:30:52.368122    3280 logs.go:282] 0 containers: []
	W1212 21:30:52.368122    3280 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:30:52.368122    3280 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:30:52.372480    3280 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:30:52.416439    3280 cri.go:89] found id: ""
	I1212 21:30:52.416439    3280 logs.go:282] 0 containers: []
	W1212 21:30:52.416439    3280 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:30:52.416439    3280 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:30:52.420746    3280 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:30:52.464733    3280 cri.go:89] found id: ""
	I1212 21:30:52.464800    3280 logs.go:282] 0 containers: []
	W1212 21:30:52.464800    3280 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:30:52.464800    3280 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:30:52.469057    3280 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:30:52.512080    3280 cri.go:89] found id: ""
	I1212 21:30:52.512158    3280 logs.go:282] 0 containers: []
	W1212 21:30:52.512158    3280 logs.go:284] No container was found matching "kindnet"
	I1212 21:30:52.512158    3280 logs.go:123] Gathering logs for Docker ...
	I1212 21:30:52.512158    3280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:30:52.543781    3280 logs.go:123] Gathering logs for container status ...
	I1212 21:30:52.543781    3280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:30:52.588290    3280 logs.go:123] Gathering logs for kubelet ...
	I1212 21:30:52.588290    3280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:30:52.653033    3280 logs.go:123] Gathering logs for dmesg ...
	I1212 21:30:52.653033    3280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:30:52.693931    3280 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:30:52.693931    3280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:30:52.781976    3280 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:30:52.773234   10313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:30:52.774514   10313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:30:52.775469   10313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:30:52.776968   10313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:30:52.777917   10313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:30:52.773234   10313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:30:52.774514   10313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:30:52.775469   10313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:30:52.776968   10313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:30:52.777917   10313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1212 21:30:52.781976    3280 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	W1212 21:30:52.781976    3280 out.go:285] * 
	W1212 21:30:52.781976    3280 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001226136s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 21:30:52.783438    3280 out.go:285] * 
	W1212 21:30:52.785599    3280 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 21:30:52.791153    3280 out.go:203] 
	W1212 21:30:52.795058    3280 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001226136s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 21:30:52.795120    3280 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 21:30:52.795120    3280 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 21:30:52.797749    3280 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p newest-cni-449900 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0": exit status 109
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-449900
helpers_test.go:244: (dbg) docker inspect newest-cni-449900:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8fae8198a0e2f6167bc49d5761b5dae3aaaf06af529e7d9526a1f943f9f5952a",
	        "Created": "2025-12-12T21:22:35.195234972Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 422240,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T21:22:35.488144172Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/8fae8198a0e2f6167bc49d5761b5dae3aaaf06af529e7d9526a1f943f9f5952a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8fae8198a0e2f6167bc49d5761b5dae3aaaf06af529e7d9526a1f943f9f5952a/hostname",
	        "HostsPath": "/var/lib/docker/containers/8fae8198a0e2f6167bc49d5761b5dae3aaaf06af529e7d9526a1f943f9f5952a/hosts",
	        "LogPath": "/var/lib/docker/containers/8fae8198a0e2f6167bc49d5761b5dae3aaaf06af529e7d9526a1f943f9f5952a/8fae8198a0e2f6167bc49d5761b5dae3aaaf06af529e7d9526a1f943f9f5952a-json.log",
	        "Name": "/newest-cni-449900",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-449900:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-449900",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4a9bf3c0ee4eaaabafc20e3de9d1f9691ed63701dacc3088a1369c1bfb243b50-init/diff:/var/lib/docker/overlay2/98a48e148b465d6f3cfef773b7defe35ccd4e6e0eb3c49d8b329f27b5b52fa09/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4a9bf3c0ee4eaaabafc20e3de9d1f9691ed63701dacc3088a1369c1bfb243b50/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4a9bf3c0ee4eaaabafc20e3de9d1f9691ed63701dacc3088a1369c1bfb243b50/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4a9bf3c0ee4eaaabafc20e3de9d1f9691ed63701dacc3088a1369c1bfb243b50/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-449900",
	                "Source": "/var/lib/docker/volumes/newest-cni-449900/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-449900",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-449900",
	                "name.minikube.sigs.k8s.io": "newest-cni-449900",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fde89981b6eb4ca746a1211ab1fbe1f31940a2b31e5100a41e3540a20fc35851",
	            "SandboxKey": "/var/run/docker/netns/fde89981b6eb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62611"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62612"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62608"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62609"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62610"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-449900": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "bcedcac448e9e1d98fcddd7097fe310c50b6a637d5f23ebf519e961f822823ab",
	                    "EndpointID": "7f3443bddde4dd45dcc425732d5708cf2a5e19f01ca0bcdde4511a4d59f9587d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-449900",
	                        "8fae8198a0e2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-449900 -n newest-cni-449900
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-449900 -n newest-cni-449900: exit status 6 (586.8348ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1212 21:30:53.721859   10784 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-449900" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-449900 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-449900 logs -n 25: (1.1013322s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                            ARGS                                                                                                            │           PROFILE            │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-729900 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.2                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:21 UTC │ 12 Dec 25 21:22 UTC │
	│ image   │ old-k8s-version-246400 image list --format=json                                                                                                                                                                            │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ pause   │ -p old-k8s-version-246400 --alsologtostderr -v=1                                                                                                                                                                           │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ unpause │ -p old-k8s-version-246400 --alsologtostderr -v=1                                                                                                                                                                           │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ delete  │ -p old-k8s-version-246400                                                                                                                                                                                                  │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ delete  │ -p old-k8s-version-246400                                                                                                                                                                                                  │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ start   │ -p newest-cni-449900 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-124600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                         │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ stop    │ -p default-k8s-diff-port-124600 --alsologtostderr -v=3                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ image   │ embed-certs-729900 image list --format=json                                                                                                                                                                                │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ pause   │ -p embed-certs-729900 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ unpause │ -p embed-certs-729900 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-124600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                    │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ start   │ -p default-k8s-diff-port-124600 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:23 UTC │
	│ delete  │ -p embed-certs-729900                                                                                                                                                                                                      │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:23 UTC │
	│ delete  │ -p embed-certs-729900                                                                                                                                                                                                      │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ image   │ default-k8s-diff-port-124600 image list --format=json                                                                                                                                                                      │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ pause   │ -p default-k8s-diff-port-124600 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ unpause │ -p default-k8s-diff-port-124600 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ delete  │ -p default-k8s-diff-port-124600                                                                                                                                                                                            │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ delete  │ -p default-k8s-diff-port-124600                                                                                                                                                                                            │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ addons  │ enable metrics-server -p no-preload-285600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                    │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:28 UTC │                     │
	│ stop    │ -p no-preload-285600 --alsologtostderr -v=3                                                                                                                                                                                │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:30 UTC │ 12 Dec 25 21:30 UTC │
	│ addons  │ enable dashboard -p no-preload-285600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                               │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:30 UTC │ 12 Dec 25 21:30 UTC │
	│ start   │ -p no-preload-285600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:30 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 21:30:11
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 21:30:11.311431   13804 out.go:360] Setting OutFile to fd 2028 ...
	I1212 21:30:11.366494   13804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:30:11.367494   13804 out.go:374] Setting ErrFile to fd 840...
	I1212 21:30:11.367494   13804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:30:11.380496   13804 out.go:368] Setting JSON to false
	I1212 21:30:11.382494   13804 start.go:133] hostinfo: {"hostname":"minikube4","uptime":9149,"bootTime":1765565862,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 21:30:11.382494   13804 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 21:30:11.386494   13804 out.go:179] * [no-preload-285600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 21:30:11.389494   13804 notify.go:221] Checking for updates...
	I1212 21:30:11.390508   13804 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:30:11.393495   13804 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:30:11.395506   13804 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 21:30:11.398496   13804 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 21:30:11.400504   13804 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:30:11.403497   13804 config.go:182] Loaded profile config "no-preload-285600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:30:11.405494   13804 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 21:30:11.518260   13804 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 21:30:11.522047   13804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:30:11.753278   13804 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 21:30:11.731465297 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:30:11.756457   13804 out.go:179] * Using the docker driver based on existing profile
	I1212 21:30:11.760219   13804 start.go:309] selected driver: docker
	I1212 21:30:11.760257   13804 start.go:927] validating driver "docker" against &{Name:no-preload-285600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-285600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:30:11.760327   13804 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:30:11.846740   13804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:30:12.077144   13804 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 21:30:12.058111571 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:30:12.077698   13804 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:30:12.077698   13804 cni.go:84] Creating CNI manager for ""
	I1212 21:30:12.077698   13804 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:30:12.077698   13804 start.go:353] cluster config:
	{Name:no-preload-285600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-285600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:30:12.080814   13804 out.go:179] * Starting "no-preload-285600" primary control-plane node in "no-preload-285600" cluster
	I1212 21:30:12.083912   13804 cache.go:134] Beginning downloading kic base image for docker with docker
	I1212 21:30:12.086321   13804 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:30:12.089654   13804 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 21:30:12.089654   13804 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:30:12.089654   13804 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\config.json ...
	I1212 21:30:12.089654   13804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1212 21:30:12.089654   13804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1212 21:30:12.089654   13804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1212 21:30:12.089654   13804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1212 21:30:12.090649   13804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1212 21:30:12.090649   13804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1212 21:30:12.090649   13804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1212 21:30:12.090649   13804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1212 21:30:12.353137   13804 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:30:12.353137   13804 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:30:12.353137   13804 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:30:12.353137   13804 start.go:360] acquireMachinesLock for no-preload-285600: {Name:mk2731f875a3a62f76017c58cc7d43a1bb1f8ba5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:30:12.353137   13804 start.go:364] duration metric: took 0s to acquireMachinesLock for "no-preload-285600"
	I1212 21:30:12.353137   13804 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:30:12.353684   13804 fix.go:54] fixHost starting: 
	I1212 21:30:12.365514   13804 cli_runner.go:164] Run: docker container inspect no-preload-285600 --format={{.State.Status}}
	I1212 21:30:12.437166   13804 fix.go:112] recreateIfNeeded on no-preload-285600: state=Stopped err=<nil>
	W1212 21:30:12.437166   13804 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:30:12.443159   13804 out.go:252] * Restarting existing docker container for "no-preload-285600" ...
	I1212 21:30:12.448159   13804 cli_runner.go:164] Run: docker start no-preload-285600
	I1212 21:30:13.953419   13804 cli_runner.go:217] Completed: docker start no-preload-285600: (1.5052355s)
	I1212 21:30:13.960859   13804 cli_runner.go:164] Run: docker container inspect no-preload-285600 --format={{.State.Status}}
	I1212 21:30:14.031860   13804 kic.go:430] container "no-preload-285600" state is running.
	I1212 21:30:14.039849   13804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-285600
	I1212 21:30:14.112858   13804 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\config.json ...
	I1212 21:30:14.114845   13804 machine.go:94] provisionDockerMachine start ...
	I1212 21:30:14.119854   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:14.192854   13804 main.go:143] libmachine: Using SSH client type: native
	I1212 21:30:14.193857   13804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62838 <nil> <nil>}
	I1212 21:30:14.193857   13804 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:30:14.195874   13804 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 21:30:14.957274   13804 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:30:14.957533   13804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1212 21:30:14.957533   13804 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 2.866838s
	I1212 21:30:14.957533   13804 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1212 21:30:14.963183   13804 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:30:14.963323   13804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1212 21:30:14.963323   13804 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 2.8726277s
	I1212 21:30:14.963323   13804 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1212 21:30:14.964339   13804 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:30:14.964339   13804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1212 21:30:14.964339   13804 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 2.8736432s
	I1212 21:30:14.964339   13804 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1212 21:30:14.964339   13804 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:30:14.964339   13804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1212 21:30:14.964339   13804 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 2.8746379s
	I1212 21:30:14.964339   13804 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1212 21:30:14.995149   13804 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:30:14.995149   13804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1212 21:30:14.995149   13804 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 2.9054481s
	I1212 21:30:14.995149   13804 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1212 21:30:15.001398   13804 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:30:15.001398   13804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1212 21:30:15.001398   13804 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 2.9116969s
	I1212 21:30:15.001398   13804 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1212 21:30:15.006281   13804 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:30:15.006281   13804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1212 21:30:15.006978   13804 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 2.9162031s
	I1212 21:30:15.006978   13804 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1212 21:30:15.039446   13804 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:30:15.039446   13804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1212 21:30:15.039446   13804 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 2.9497439s
	I1212 21:30:15.039446   13804 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1212 21:30:15.039446   13804 cache.go:87] Successfully saved all images to host disk.
	I1212 21:30:17.371371   13804 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-285600
	
	I1212 21:30:17.371371   13804 ubuntu.go:182] provisioning hostname "no-preload-285600"
	I1212 21:30:17.374694   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:17.431417   13804 main.go:143] libmachine: Using SSH client type: native
	I1212 21:30:17.431417   13804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62838 <nil> <nil>}
	I1212 21:30:17.431417   13804 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-285600 && echo "no-preload-285600" | sudo tee /etc/hostname
	I1212 21:30:17.615567   13804 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-285600
	
	I1212 21:30:17.620003   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:17.675055   13804 main.go:143] libmachine: Using SSH client type: native
	I1212 21:30:17.675719   13804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62838 <nil> <nil>}
	I1212 21:30:17.675719   13804 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-285600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-285600/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-285600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:30:17.863046   13804 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:30:17.863046   13804 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1212 21:30:17.863046   13804 ubuntu.go:190] setting up certificates
	I1212 21:30:17.863579   13804 provision.go:84] configureAuth start
	I1212 21:30:17.867203   13804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-285600
	I1212 21:30:17.921910   13804 provision.go:143] copyHostCerts
	I1212 21:30:17.921910   13804 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1212 21:30:17.921910   13804 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1212 21:30:17.922850   13804 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1212 21:30:17.923414   13804 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1212 21:30:17.923414   13804 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1212 21:30:17.923977   13804 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 21:30:17.924758   13804 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1212 21:30:17.924758   13804 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1212 21:30:17.924916   13804 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 21:30:17.925647   13804 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.no-preload-285600 san=[127.0.0.1 192.168.121.2 localhost minikube no-preload-285600]
	I1212 21:30:17.969098   13804 provision.go:177] copyRemoteCerts
	I1212 21:30:17.972961   13804 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:30:17.975732   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:18.033900   13804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62838 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	I1212 21:30:18.156529   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 21:30:18.190271   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 21:30:18.219028   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 21:30:18.247371   13804 provision.go:87] duration metric: took 383.7852ms to configureAuth
	I1212 21:30:18.247371   13804 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:30:18.248196   13804 config.go:182] Loaded profile config "no-preload-285600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:30:18.253065   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:18.307356   13804 main.go:143] libmachine: Using SSH client type: native
	I1212 21:30:18.308437   13804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62838 <nil> <nil>}
	I1212 21:30:18.308437   13804 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 21:30:18.484387   13804 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1212 21:30:18.484387   13804 ubuntu.go:71] root file system type: overlay
	I1212 21:30:18.484387   13804 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 21:30:18.488431   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:18.543927   13804 main.go:143] libmachine: Using SSH client type: native
	I1212 21:30:18.544057   13804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62838 <nil> <nil>}
	I1212 21:30:18.544057   13804 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 21:30:18.725295   13804 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 21:30:18.729293   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:18.786383   13804 main.go:143] libmachine: Using SSH client type: native
	I1212 21:30:18.787353   13804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62838 <nil> <nil>}
	I1212 21:30:18.787415   13804 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 21:30:18.969169   13804 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:30:18.969169   13804 machine.go:97] duration metric: took 4.8542454s to provisionDockerMachine
	I1212 21:30:18.969169   13804 start.go:293] postStartSetup for "no-preload-285600" (driver="docker")
	I1212 21:30:18.969169   13804 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:30:18.973559   13804 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:30:18.977516   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:19.030405   13804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62838 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	I1212 21:30:19.165106   13804 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:30:19.173383   13804 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:30:19.173383   13804 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:30:19.173383   13804 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1212 21:30:19.173383   13804 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1212 21:30:19.174601   13804 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> 133962.pem in /etc/ssl/certs
	I1212 21:30:19.179034   13804 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:30:19.191703   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /etc/ssl/certs/133962.pem (1708 bytes)
	I1212 21:30:19.218878   13804 start.go:296] duration metric: took 249.7055ms for postStartSetup
	I1212 21:30:19.224011   13804 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:30:19.227131   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:19.279470   13804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62838 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	I1212 21:30:19.406985   13804 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:30:19.415865   13804 fix.go:56] duration metric: took 7.062067s for fixHost
	I1212 21:30:19.415865   13804 start.go:83] releasing machines lock for "no-preload-285600", held for 7.0626137s
	I1212 21:30:19.419613   13804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-285600
	I1212 21:30:19.476904   13804 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1212 21:30:19.481453   13804 ssh_runner.go:195] Run: cat /version.json
	I1212 21:30:19.481484   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:19.483912   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:19.536799   13804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62838 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	I1212 21:30:19.547561   13804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62838 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	W1212 21:30:19.661665   13804 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1212 21:30:19.667210   13804 ssh_runner.go:195] Run: systemctl --version
	I1212 21:30:19.682255   13804 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:30:19.691854   13804 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:30:19.696344   13804 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:30:19.710554   13804 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:30:19.710554   13804 start.go:496] detecting cgroup driver to use...
	I1212 21:30:19.710554   13804 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:30:19.710554   13804 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:30:19.738854   13804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 21:30:19.758305   13804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1212 21:30:19.763550   13804 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1212 21:30:19.763550   13804 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1212 21:30:19.778518   13804 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 21:30:19.782511   13804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 21:30:19.803423   13804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:30:19.823199   13804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 21:30:19.842875   13804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:30:19.861015   13804 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:30:19.878016   13804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 21:30:19.896016   13804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 21:30:19.917384   13804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 21:30:19.937797   13804 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:30:19.955074   13804 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:30:19.974670   13804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:30:20.125841   13804 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 21:30:20.307940   13804 start.go:496] detecting cgroup driver to use...
	I1212 21:30:20.307940   13804 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:30:20.312305   13804 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 21:30:20.338880   13804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:30:20.361799   13804 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:30:20.425840   13804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:30:20.448078   13804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 21:30:20.466273   13804 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:30:20.493401   13804 ssh_runner.go:195] Run: which cri-dockerd
	I1212 21:30:20.505640   13804 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 21:30:20.517978   13804 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1212 21:30:20.546077   13804 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 21:30:20.685945   13804 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 21:30:20.820797   13804 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 21:30:20.820797   13804 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 21:30:20.846868   13804 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1212 21:30:20.870150   13804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:30:21.006241   13804 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 21:30:21.847456   13804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:30:21.870131   13804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1212 21:30:21.892265   13804 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1212 21:30:21.918146   13804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:30:21.940975   13804 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 21:30:22.091526   13804 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 21:30:22.237813   13804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:30:22.375430   13804 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 21:30:22.400803   13804 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1212 21:30:22.424619   13804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:30:22.577023   13804 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1212 21:30:22.684499   13804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:30:22.703199   13804 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 21:30:22.707457   13804 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 21:30:22.717003   13804 start.go:564] Will wait 60s for crictl version
	I1212 21:30:22.722114   13804 ssh_runner.go:195] Run: which crictl
	I1212 21:30:22.736201   13804 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:30:22.783830   13804 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1212 21:30:22.787385   13804 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:30:22.831267   13804 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:30:22.876285   13804 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1212 21:30:22.880058   13804 cli_runner.go:164] Run: docker exec -t no-preload-285600 dig +short host.docker.internal
	I1212 21:30:23.014334   13804 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1212 21:30:23.019335   13804 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1212 21:30:23.026955   13804 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:30:23.046973   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:23.103000   13804 kubeadm.go:884] updating cluster {Name:no-preload-285600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-285600 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 21:30:23.103289   13804 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 21:30:23.108430   13804 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:30:23.145267   13804 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 21:30:23.145267   13804 cache_images.go:86] Images are preloaded, skipping loading
	I1212 21:30:23.145267   13804 kubeadm.go:935] updating node { 192.168.121.2 8443 v1.35.0-beta.0 docker true true} ...
	I1212 21:30:23.145794   13804 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-285600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.121.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-285600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:30:23.149307   13804 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 21:30:23.218275   13804 cni.go:84] Creating CNI manager for ""
	I1212 21:30:23.218275   13804 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:30:23.218275   13804 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 21:30:23.218275   13804 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.121.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-285600 NodeName:no-preload-285600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.121.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.121.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Static
PodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:30:23.218275   13804 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.121.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-285600"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.121.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.121.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:30:23.224071   13804 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 21:30:23.236229   13804 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:30:23.240995   13804 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:30:23.253852   13804 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1212 21:30:23.272662   13804 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 21:30:23.293961   13804 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1212 21:30:23.318313   13804 ssh_runner.go:195] Run: grep 192.168.121.2	control-plane.minikube.internal$ /etc/hosts
	I1212 21:30:23.325082   13804 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.121.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:30:23.346396   13804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:30:23.486209   13804 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:30:23.509994   13804 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600 for IP: 192.168.121.2
	I1212 21:30:23.509994   13804 certs.go:195] generating shared ca certs ...
	I1212 21:30:23.509994   13804 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:30:23.510778   13804 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1212 21:30:23.510778   13804 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1212 21:30:23.510778   13804 certs.go:257] generating profile certs ...
	I1212 21:30:23.511512   13804 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\client.key
	I1212 21:30:23.511512   13804 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.key.a3b2baf6
	I1212 21:30:23.512294   13804 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\proxy-client.key
	I1212 21:30:23.513282   13804 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem (1338 bytes)
	W1212 21:30:23.513306   13804 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396_empty.pem, impossibly tiny 0 bytes
	I1212 21:30:23.513306   13804 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1212 21:30:23.513306   13804 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 21:30:23.513825   13804 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 21:30:23.514133   13804 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1212 21:30:23.514133   13804 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem (1708 bytes)
	I1212 21:30:23.516015   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:30:23.543721   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 21:30:23.570887   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:30:23.599906   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:30:23.628308   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 21:30:23.655194   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 21:30:23.680557   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:30:23.709445   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:30:23.735490   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:30:23.763952   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem --> /usr/share/ca-certificates/13396.pem (1338 bytes)
	I1212 21:30:23.788819   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /usr/share/ca-certificates/133962.pem (1708 bytes)
	I1212 21:30:23.817493   13804 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:30:23.843244   13804 ssh_runner.go:195] Run: openssl version
	I1212 21:30:23.857029   13804 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:30:23.875085   13804 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:30:23.894989   13804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:30:23.903335   13804 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:30:23.907817   13804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:30:23.954829   13804 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:30:23.973758   13804 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13396.pem
	I1212 21:30:23.992281   13804 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13396.pem /etc/ssl/certs/13396.pem
	I1212 21:30:24.012825   13804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13396.pem
	I1212 21:30:24.021794   13804 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 21:30:24.027262   13804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13396.pem
	I1212 21:30:24.076227   13804 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:30:24.097029   13804 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/133962.pem
	I1212 21:30:24.114364   13804 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/133962.pem /etc/ssl/certs/133962.pem
	I1212 21:30:24.131237   13804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133962.pem
	I1212 21:30:24.139762   13804 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 21:30:24.144290   13804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133962.pem
	I1212 21:30:24.195500   13804 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:30:24.213100   13804 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:30:24.224086   13804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:30:24.274630   13804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:30:24.322795   13804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:30:24.371721   13804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:30:24.422510   13804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:30:24.475266   13804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 21:30:24.519671   13804 kubeadm.go:401] StartCluster: {Name:no-preload-285600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-285600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:30:24.524264   13804 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 21:30:24.559622   13804 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:30:24.571455   13804 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 21:30:24.571455   13804 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 21:30:24.576936   13804 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:30:24.591763   13804 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:30:24.596129   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:24.651902   13804 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-285600" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:30:24.652253   13804 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-285600" cluster setting kubeconfig missing "no-preload-285600" context setting]
	I1212 21:30:24.652697   13804 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:30:24.674806   13804 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:30:24.692277   13804 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1212 21:30:24.692277   13804 kubeadm.go:602] duration metric: took 120.82ms to restartPrimaryControlPlane
	I1212 21:30:24.692277   13804 kubeadm.go:403] duration metric: took 172.6933ms to StartCluster
	I1212 21:30:24.692277   13804 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:30:24.692277   13804 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:30:24.693507   13804 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:30:24.694169   13804 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 21:30:24.694169   13804 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 21:30:24.694746   13804 addons.go:70] Setting storage-provisioner=true in profile "no-preload-285600"
	I1212 21:30:24.694746   13804 addons.go:70] Setting dashboard=true in profile "no-preload-285600"
	I1212 21:30:24.694746   13804 addons.go:239] Setting addon storage-provisioner=true in "no-preload-285600"
	I1212 21:30:24.694746   13804 config.go:182] Loaded profile config "no-preload-285600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:30:24.694746   13804 addons.go:70] Setting default-storageclass=true in profile "no-preload-285600"
	I1212 21:30:24.694746   13804 addons.go:239] Setting addon dashboard=true in "no-preload-285600"
	I1212 21:30:24.694746   13804 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-285600"
	I1212 21:30:24.694746   13804 host.go:66] Checking if "no-preload-285600" exists ...
	W1212 21:30:24.694746   13804 addons.go:248] addon dashboard should already be in state true
	I1212 21:30:24.694746   13804 host.go:66] Checking if "no-preload-285600" exists ...
	I1212 21:30:24.698139   13804 out.go:179] * Verifying Kubernetes components...
	I1212 21:30:24.704555   13804 cli_runner.go:164] Run: docker container inspect no-preload-285600 --format={{.State.Status}}
	I1212 21:30:24.704612   13804 cli_runner.go:164] Run: docker container inspect no-preload-285600 --format={{.State.Status}}
	I1212 21:30:24.704612   13804 cli_runner.go:164] Run: docker container inspect no-preload-285600 --format={{.State.Status}}
	I1212 21:30:24.705748   13804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:30:24.762431   13804 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:30:24.762431   13804 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1212 21:30:24.764424   13804 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:30:24.764424   13804 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:30:24.767454   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:24.767454   13804 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1212 21:30:24.769433   13804 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 21:30:24.769433   13804 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 21:30:24.773442   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:24.780427   13804 addons.go:239] Setting addon default-storageclass=true in "no-preload-285600"
	I1212 21:30:24.780427   13804 host.go:66] Checking if "no-preload-285600" exists ...
	I1212 21:30:24.787430   13804 cli_runner.go:164] Run: docker container inspect no-preload-285600 --format={{.State.Status}}
	I1212 21:30:24.820427   13804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62838 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	I1212 21:30:24.826439   13804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62838 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	I1212 21:30:24.837426   13804 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:30:24.837426   13804 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:30:24.840425   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:24.872429   13804 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:30:24.893413   13804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62838 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	I1212 21:30:24.963677   13804 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 21:30:24.963677   13804 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 21:30:24.967679   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:30:24.982575   13804 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 21:30:24.982575   13804 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 21:30:25.004580   13804 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 21:30:25.004580   13804 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 21:30:25.025729   13804 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 21:30:25.025729   13804 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 21:30:25.051800   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:30:25.053624   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:25.061392   13804 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 21:30:25.061392   13804 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1212 21:30:25.072688   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.072688   13804 retry.go:31] will retry after 158.823977ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.110005   13804 node_ready.go:35] waiting up to 6m0s for node "no-preload-285600" to be "Ready" ...
	I1212 21:30:25.146675   13804 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 21:30:25.146675   13804 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 21:30:25.168917   13804 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 21:30:25.168917   13804 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 21:30:25.190262   13804 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 21:30:25.190262   13804 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1212 21:30:25.237134   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:30:25.255181   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.255181   13804 retry.go:31] will retry after 222.613203ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.258203   13804 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 21:30:25.258203   13804 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 21:30:25.281581   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:30:25.360910   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.360910   13804 retry.go:31] will retry after 528.174411ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:30:25.396771   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.396771   13804 retry.go:31] will retry after 334.337457ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.483899   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:30:25.562673   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.562738   13804 retry.go:31] will retry after 526.924446ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.736852   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:30:25.814449   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.814449   13804 retry.go:31] will retry after 242.822318ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.895040   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:30:25.976722   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.976722   13804 retry.go:31] will retry after 649.835265ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:26.062555   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 21:30:26.094920   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:30:26.173577   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:26.173577   13804 retry.go:31] will retry after 303.723342ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:30:26.206503   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:26.206503   13804 retry.go:31] will retry after 711.474393ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:26.482577   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:30:26.584453   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:26.584453   13804 retry.go:31] will retry after 1.214394493s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:26.632132   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:30:26.707550   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:26.707577   13804 retry.go:31] will retry after 679.917817ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:26.923400   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:30:27.004405   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:27.004405   13804 retry.go:31] will retry after 921.431314ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:27.393372   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:30:27.464948   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:27.464948   13804 retry.go:31] will retry after 1.86941024s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:27.806617   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:30:27.880154   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:27.880250   13804 retry.go:31] will retry after 870.607292ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:27.930624   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:30:28.010568   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:28.010568   13804 retry.go:31] will retry after 1.688030068s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:28.756973   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:30:28.854322   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:28.854322   13804 retry.go:31] will retry after 1.72717743s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:29.339399   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:30:29.418550   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:29.418550   13804 retry.go:31] will retry after 2.160026616s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:29.704224   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:30:29.784607   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:29.784607   13804 retry.go:31] will retry after 1.396897779s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:30.585867   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:30:30.664243   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:30.664314   13804 retry.go:31] will retry after 3.060722722s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:31.188925   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:30:31.270881   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:31.270881   13804 retry.go:31] will retry after 3.544218054s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:31.584146   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:30:31.661710   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:31.661710   13804 retry.go:31] will retry after 3.805789738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:33.730718   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:30:33.815337   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:33.815337   13804 retry.go:31] will retry after 4.430320375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:34.819397   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:30:34.899243   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:34.899243   13804 retry.go:31] will retry after 6.309363077s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:30:35.143657   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:30:35.473027   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:30:35.571773   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:35.571773   13804 retry.go:31] will retry after 2.80996556s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:38.250480   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:30:38.332990   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:38.332990   13804 retry.go:31] will retry after 8.351867848s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:38.387198   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:30:38.470982   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:38.470982   13804 retry.go:31] will retry after 8.954426178s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:41.214251   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:30:41.296230   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:41.296230   13804 retry.go:31] will retry after 7.46364933s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:30:45.188063   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:30:46.689378   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:30:46.780060   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:46.780173   13804 retry.go:31] will retry after 7.773373788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:47.432175   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:30:47.509090   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:47.509090   13804 retry.go:31] will retry after 12.066548893s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:48.765276   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:30:48.850081   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:48.850081   13804 retry.go:31] will retry after 11.297010825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:52.137302    3280 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 21:30:52.138027    3280 kubeadm.go:319] 
	I1212 21:30:52.138843    3280 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 21:30:52.141943    3280 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 21:30:52.143509    3280 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 21:30:52.143682    3280 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 21:30:52.143737    3280 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 21:30:52.143737    3280 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 21:30:52.143737    3280 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 21:30:52.143737    3280 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 21:30:52.143737    3280 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 21:30:52.144310    3280 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 21:30:52.144310    3280 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 21:30:52.144310    3280 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 21:30:52.144310    3280 kubeadm.go:319] CONFIG_INET: enabled
	I1212 21:30:52.144310    3280 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 21:30:52.144310    3280 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 21:30:52.144995    3280 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 21:30:52.144995    3280 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 21:30:52.144995    3280 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 21:30:52.144995    3280 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 21:30:52.144995    3280 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 21:30:52.145604    3280 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 21:30:52.145604    3280 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 21:30:52.145604    3280 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 21:30:52.145604    3280 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 21:30:52.145604    3280 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 21:30:52.145604    3280 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 21:30:52.146177    3280 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 21:30:52.146242    3280 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 21:30:52.146317    3280 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 21:30:52.146393    3280 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 21:30:52.146451    3280 kubeadm.go:319] OS: Linux
	I1212 21:30:52.146525    3280 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 21:30:52.146600    3280 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 21:30:52.146675    3280 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 21:30:52.146751    3280 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 21:30:52.146798    3280 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 21:30:52.146881    3280 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 21:30:52.146881    3280 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 21:30:52.146881    3280 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 21:30:52.146881    3280 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 21:30:52.146881    3280 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:30:52.146881    3280 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:30:52.147438    3280 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 21:30:52.147438    3280 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:30:52.149720    3280 out.go:252]   - Generating certificates and keys ...
	I1212 21:30:52.150302    3280 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 21:30:52.150302    3280 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 21:30:52.150302    3280 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 21:30:52.150302    3280 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 21:30:52.150831    3280 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 21:30:52.150938    3280 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 21:30:52.150938    3280 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 21:30:52.150938    3280 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 21:30:52.150938    3280 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 21:30:52.150938    3280 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 21:30:52.151461    3280 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 21:30:52.151568    3280 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:30:52.151653    3280 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:30:52.151698    3280 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 21:30:52.151698    3280 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:30:52.151698    3280 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:30:52.151698    3280 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:30:52.151698    3280 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:30:52.152300    3280 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:30:52.154451    3280 out.go:252]   - Booting up control plane ...
	I1212 21:30:52.154764    3280 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:30:52.154956    3280 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:30:52.155143    3280 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:30:52.155412    3280 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:30:52.155651    3280 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 21:30:52.155876    3280 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 21:30:52.156043    3280 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:30:52.156043    3280 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 21:30:52.156043    3280 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 21:30:52.156043    3280 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 21:30:52.156043    3280 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001226136s
	I1212 21:30:52.156043    3280 kubeadm.go:319] 
	I1212 21:30:52.156043    3280 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 21:30:52.156043    3280 kubeadm.go:319] 	- The kubelet is not running
	I1212 21:30:52.156809    3280 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 21:30:52.156973    3280 kubeadm.go:319] 
	I1212 21:30:52.156973    3280 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 21:30:52.156973    3280 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 21:30:52.156973    3280 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 21:30:52.156973    3280 kubeadm.go:319] 
	I1212 21:30:52.156973    3280 kubeadm.go:403] duration metric: took 8m3.9483682s to StartCluster
	I1212 21:30:52.156973    3280 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:30:52.160832    3280 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:30:52.223294    3280 cri.go:89] found id: ""
	I1212 21:30:52.223294    3280 logs.go:282] 0 containers: []
	W1212 21:30:52.223294    3280 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:30:52.223294    3280 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:30:52.227810    3280 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:30:52.274653    3280 cri.go:89] found id: ""
	I1212 21:30:52.274653    3280 logs.go:282] 0 containers: []
	W1212 21:30:52.274653    3280 logs.go:284] No container was found matching "etcd"
	I1212 21:30:52.274653    3280 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:30:52.279047    3280 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:30:52.320887    3280 cri.go:89] found id: ""
	I1212 21:30:52.320887    3280 logs.go:282] 0 containers: []
	W1212 21:30:52.320887    3280 logs.go:284] No container was found matching "coredns"
	I1212 21:30:52.320887    3280 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:30:52.323880    3280 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:30:52.368122    3280 cri.go:89] found id: ""
	I1212 21:30:52.368122    3280 logs.go:282] 0 containers: []
	W1212 21:30:52.368122    3280 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:30:52.368122    3280 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:30:52.372480    3280 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:30:52.416439    3280 cri.go:89] found id: ""
	I1212 21:30:52.416439    3280 logs.go:282] 0 containers: []
	W1212 21:30:52.416439    3280 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:30:52.416439    3280 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:30:52.420746    3280 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:30:52.464733    3280 cri.go:89] found id: ""
	I1212 21:30:52.464800    3280 logs.go:282] 0 containers: []
	W1212 21:30:52.464800    3280 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:30:52.464800    3280 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:30:52.469057    3280 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:30:52.512080    3280 cri.go:89] found id: ""
	I1212 21:30:52.512158    3280 logs.go:282] 0 containers: []
	W1212 21:30:52.512158    3280 logs.go:284] No container was found matching "kindnet"
	I1212 21:30:52.512158    3280 logs.go:123] Gathering logs for Docker ...
	I1212 21:30:52.512158    3280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:30:52.543781    3280 logs.go:123] Gathering logs for container status ...
	I1212 21:30:52.543781    3280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:30:52.588290    3280 logs.go:123] Gathering logs for kubelet ...
	I1212 21:30:52.588290    3280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:30:52.653033    3280 logs.go:123] Gathering logs for dmesg ...
	I1212 21:30:52.653033    3280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:30:52.693931    3280 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:30:52.693931    3280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:30:52.781976    3280 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:30:52.773234   10313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:30:52.774514   10313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:30:52.775469   10313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:30:52.776968   10313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:30:52.777917   10313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:30:52.773234   10313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:30:52.774514   10313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:30:52.775469   10313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:30:52.776968   10313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:30:52.777917   10313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1212 21:30:52.781976    3280 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001226136s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 21:30:52.781976    3280 out.go:285] * 
	W1212 21:30:52.781976    3280 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001226136s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 21:30:52.783438    3280 out.go:285] * 
	W1212 21:30:52.785599    3280 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 21:30:52.791153    3280 out.go:203] 
	W1212 21:30:52.795058    3280 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001226136s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 21:30:52.795120    3280 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 21:30:52.795120    3280 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 21:30:52.797749    3280 out.go:203] 
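The kubelet log and kubeadm warnings above point at one root cause: the node is on cgroup v1, and kubelet v1.35 refuses to start there unless `FailCgroupV1` is set to `false`. A minimal triage sketch (the `detect_cgroup_version` helper is hypothetical; the `--extra-config` spelling is an assumption based on the warnings above, not a verified fix):

```shell
# Sketch: report whether the host uses cgroup v1 or v2, which determines
# whether the kubelet's FailCgroupV1 validation (seen in the log above) trips.
detect_cgroup_version() {
  if [ -f /sys/fs/cgroup/cgroup.controllers ]; then
    echo "v2"   # unified hierarchy (cgroup v2)
  else
    echo "v1"   # legacy hierarchy (cgroup v1)
  fi
}
detect_cgroup_version

# On a cgroup v1 host, the kubeadm warning above names the escape hatch:
# set the kubelet config option FailCgroupV1 to false. With minikube that
# would presumably be passed via --extra-config, e.g.:
#   minikube start --extra-config=kubelet.failCgroupV1=false
# (option spelling is an assumption; check against your kubelet version)
```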
	
	
	==> Docker <==
	Dec 12 21:22:44 newest-cni-449900 dockerd[1190]: time="2025-12-12T21:22:44.897992617Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 12 21:22:44 newest-cni-449900 dockerd[1190]: time="2025-12-12T21:22:44.898182835Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 12 21:22:44 newest-cni-449900 dockerd[1190]: time="2025-12-12T21:22:44.898196437Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 12 21:22:44 newest-cni-449900 dockerd[1190]: time="2025-12-12T21:22:44.898201637Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 21:22:44 newest-cni-449900 dockerd[1190]: time="2025-12-12T21:22:44.898208938Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 12 21:22:44 newest-cni-449900 dockerd[1190]: time="2025-12-12T21:22:44.898237241Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 12 21:22:44 newest-cni-449900 dockerd[1190]: time="2025-12-12T21:22:44.898288445Z" level=info msg="Initializing buildkit"
	Dec 12 21:22:45 newest-cni-449900 dockerd[1190]: time="2025-12-12T21:22:45.027186712Z" level=info msg="Completed buildkit initialization"
	Dec 12 21:22:45 newest-cni-449900 dockerd[1190]: time="2025-12-12T21:22:45.035180467Z" level=info msg="Daemon has completed initialization"
	Dec 12 21:22:45 newest-cni-449900 dockerd[1190]: time="2025-12-12T21:22:45.035400987Z" level=info msg="API listen on [::]:2376"
	Dec 12 21:22:45 newest-cni-449900 dockerd[1190]: time="2025-12-12T21:22:45.035429690Z" level=info msg="API listen on /run/docker.sock"
	Dec 12 21:22:45 newest-cni-449900 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 12 21:22:45 newest-cni-449900 dockerd[1190]: time="2025-12-12T21:22:45.035467194Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 21:22:45 newest-cni-449900 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 21:22:45 newest-cni-449900 cri-dockerd[1484]: time="2025-12-12T21:22:45Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 12 21:22:45 newest-cni-449900 cri-dockerd[1484]: time="2025-12-12T21:22:45Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 12 21:22:45 newest-cni-449900 cri-dockerd[1484]: time="2025-12-12T21:22:45Z" level=info msg="Start docker client with request timeout 0s"
	Dec 12 21:22:45 newest-cni-449900 cri-dockerd[1484]: time="2025-12-12T21:22:45Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 12 21:22:45 newest-cni-449900 cri-dockerd[1484]: time="2025-12-12T21:22:45Z" level=info msg="Loaded network plugin cni"
	Dec 12 21:22:45 newest-cni-449900 cri-dockerd[1484]: time="2025-12-12T21:22:45Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 12 21:22:45 newest-cni-449900 cri-dockerd[1484]: time="2025-12-12T21:22:45Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 12 21:22:45 newest-cni-449900 cri-dockerd[1484]: time="2025-12-12T21:22:45Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 12 21:22:45 newest-cni-449900 cri-dockerd[1484]: time="2025-12-12T21:22:45Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 12 21:22:45 newest-cni-449900 cri-dockerd[1484]: time="2025-12-12T21:22:45Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 12 21:22:45 newest-cni-449900 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:30:54.748422   10468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:30:54.749323   10468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:30:54.752647   10468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:30:54.754222   10468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:30:54.755366   10468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +5.497307] CPU: 8 PID: 454817 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f913e6c4b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7f913e6c4af6.
	[  +0.000001] RSP: 002b:00007ffd0c4e19c0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000034] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	[  +0.811337] CPU: 0 PID: 454944 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7fe620208b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7fe620208af6.
	[  +0.000001] RSP: 002b:00007ffc944a0d80 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 21:30:54 up  2:32,  0 user,  load average: 0.70, 1.78, 3.19
	Linux newest-cni-449900 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 21:30:51 newest-cni-449900 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:30:52 newest-cni-449900 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 12 21:30:52 newest-cni-449900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:30:52 newest-cni-449900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:30:52 newest-cni-449900 kubelet[10195]: E1212 21:30:52.106366   10195 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:30:52 newest-cni-449900 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:30:52 newest-cni-449900 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:30:52 newest-cni-449900 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 12 21:30:52 newest-cni-449900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:30:52 newest-cni-449900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:30:52 newest-cni-449900 kubelet[10322]: E1212 21:30:52.874597   10322 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:30:52 newest-cni-449900 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:30:52 newest-cni-449900 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:30:53 newest-cni-449900 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 12 21:30:53 newest-cni-449900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:30:53 newest-cni-449900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:30:53 newest-cni-449900 kubelet[10337]: E1212 21:30:53.617034   10337 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:30:53 newest-cni-449900 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:30:53 newest-cni-449900 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:30:54 newest-cni-449900 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 12 21:30:54 newest-cni-449900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:30:54 newest-cni-449900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:30:54 newest-cni-449900 kubelet[10370]: E1212 21:30:54.361696   10370 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:30:54 newest-cni-449900 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:30:54 newest-cni-449900 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-449900 -n newest-cni-449900
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-449900 -n newest-cni-449900: exit status 6 (590.0932ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1212 21:30:55.738317    4372 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-449900" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-449900" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (518.94s)

TestStartStop/group/no-preload/serial/DeployApp (5.68s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-285600 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context no-preload-285600 create -f testdata\busybox.yaml: exit status 1 (100.8802ms)

** stderr ** 
	error: context "no-preload-285600" does not exist

** /stderr **
start_stop_delete_test.go:194: kubectl --context no-preload-285600 create -f testdata\busybox.yaml failed: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-285600
helpers_test.go:244: (dbg) docker inspect no-preload-285600:

-- stdout --
	[
	    {
	        "Id": "87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941",
	        "Created": "2025-12-12T21:19:18.160660304Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 389675,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T21:19:18.519800705Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941/hostname",
	        "HostsPath": "/var/lib/docker/containers/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941/hosts",
	        "LogPath": "/var/lib/docker/containers/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941-json.log",
	        "Name": "/no-preload-285600",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-285600:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-285600",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7f57614ba244d0d3b05389aca0ac3118a52910d8ae213a8eb43d4329923d883e-init/diff:/var/lib/docker/overlay2/98a48e148b465d6f3cfef773b7defe35ccd4e6e0eb3c49d8b329f27b5b52fa09/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7f57614ba244d0d3b05389aca0ac3118a52910d8ae213a8eb43d4329923d883e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7f57614ba244d0d3b05389aca0ac3118a52910d8ae213a8eb43d4329923d883e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7f57614ba244d0d3b05389aca0ac3118a52910d8ae213a8eb43d4329923d883e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-285600",
	                "Source": "/var/lib/docker/volumes/no-preload-285600/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-285600",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-285600",
	                "name.minikube.sigs.k8s.io": "no-preload-285600",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5af3f413668a0d538b65d8f61bdb8f76c9d3fffc039f5c39eab88c8e538214f8",
	            "SandboxKey": "/var/run/docker/netns/5af3f413668a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62132"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62133"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62134"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-285600": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.121.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:79:02",
	                    "DriverOpts": null,
	                    "NetworkID": "eade8f7b7c484afc6ac9fb22c89a1319287a418e545549931d13e1b2247abede",
	                    "EndpointID": "41d46d4540a8534435610e3455fd03f86fe030069ea47ea0bc7248badc5ae81c",
	                    "Gateway": "192.168.121.1",
	                    "IPAddress": "192.168.121.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-285600",
	                        "87204d707694"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-285600 -n no-preload-285600
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-285600 -n no-preload-285600: exit status 6 (586.6915ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1212 21:28:04.302098   11068 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-285600" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-285600 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-285600 logs -n 25: (1.1458875s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                            ARGS                                                                                                            │           PROFILE            │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-124600 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:21 UTC │ 12 Dec 25 21:22 UTC │
	│ addons  │ enable metrics-server -p embed-certs-729900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                   │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:21 UTC │ 12 Dec 25 21:21 UTC │
	│ stop    │ -p embed-certs-729900 --alsologtostderr -v=3                                                                                                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:21 UTC │ 12 Dec 25 21:21 UTC │
	│ addons  │ enable dashboard -p embed-certs-729900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                              │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:21 UTC │ 12 Dec 25 21:21 UTC │
	│ start   │ -p embed-certs-729900 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.2                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:21 UTC │ 12 Dec 25 21:22 UTC │
	│ image   │ old-k8s-version-246400 image list --format=json                                                                                                                                                                            │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ pause   │ -p old-k8s-version-246400 --alsologtostderr -v=1                                                                                                                                                                           │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ unpause │ -p old-k8s-version-246400 --alsologtostderr -v=1                                                                                                                                                                           │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ delete  │ -p old-k8s-version-246400                                                                                                                                                                                                  │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ delete  │ -p old-k8s-version-246400                                                                                                                                                                                                  │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ start   │ -p newest-cni-449900 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-124600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                         │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ stop    │ -p default-k8s-diff-port-124600 --alsologtostderr -v=3                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ image   │ embed-certs-729900 image list --format=json                                                                                                                                                                                │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ pause   │ -p embed-certs-729900 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ unpause │ -p embed-certs-729900 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-124600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                    │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ start   │ -p default-k8s-diff-port-124600 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:23 UTC │
	│ delete  │ -p embed-certs-729900                                                                                                                                                                                                      │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:23 UTC │
	│ delete  │ -p embed-certs-729900                                                                                                                                                                                                      │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ image   │ default-k8s-diff-port-124600 image list --format=json                                                                                                                                                                      │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ pause   │ -p default-k8s-diff-port-124600 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ unpause │ -p default-k8s-diff-port-124600 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ delete  │ -p default-k8s-diff-port-124600                                                                                                                                                                                            │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ delete  │ -p default-k8s-diff-port-124600                                                                                                                                                                                            │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 21:22:58
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 21:22:58.216335   14160 out.go:360] Setting OutFile to fd 1132 ...
	I1212 21:22:58.266331   14160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:22:58.266331   14160 out.go:374] Setting ErrFile to fd 1508...
	I1212 21:22:58.266331   14160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:22:58.280322   14160 out.go:368] Setting JSON to false
	I1212 21:22:58.283341   14160 start.go:133] hostinfo: {"hostname":"minikube4","uptime":8716,"bootTime":1765565862,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 21:22:58.283341   14160 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 21:22:58.287338   14160 out.go:179] * [default-k8s-diff-port-124600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 21:22:58.290341   14160 notify.go:221] Checking for updates...
	I1212 21:22:58.292332   14160 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:22:58.294328   14160 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:22:58.296340   14160 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 21:22:58.298340   14160 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 21:22:58.301322   14160 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:22:58.304323   14160 config.go:182] Loaded profile config "default-k8s-diff-port-124600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1212 21:22:58.305325   14160 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 21:22:58.434944   14160 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 21:22:58.438949   14160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:22:58.676253   14160 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-12 21:22:58.655092827 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:22:58.680239   14160 out.go:179] * Using the docker driver based on existing profile
	I1212 21:22:58.682239   14160 start.go:309] selected driver: docker
	I1212 21:22:58.682239   14160 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-124600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-124600 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:22:58.682239   14160 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:22:58.732240   14160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:22:58.965241   14160 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:95 OomKillDisable:true NGoroutines:100 SystemTime:2025-12-12 21:22:58.948719453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 In
dexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDesc
ription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Prog
ram Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:22:58.966243   14160 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:22:58.966243   14160 cni.go:84] Creating CNI manager for ""
	I1212 21:22:58.966243   14160 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:22:58.966243   14160 start.go:353] cluster config:
	{Name:default-k8s-diff-port-124600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-124600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:22:58.968243   14160 out.go:179] * Starting "default-k8s-diff-port-124600" primary control-plane node in "default-k8s-diff-port-124600" cluster
	I1212 21:22:58.972244   14160 cache.go:134] Beginning downloading kic base image for docker with docker
	I1212 21:22:58.974236   14160 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:22:58.977243   14160 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1212 21:22:58.977243   14160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:22:58.977243   14160 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1212 21:22:58.977243   14160 cache.go:65] Caching tarball of preloaded images
	I1212 21:22:58.977243   14160 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 21:22:58.978245   14160 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1212 21:22:58.978245   14160 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\config.json ...
	I1212 21:22:59.059257   14160 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:22:59.059257   14160 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:22:59.059257   14160 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:22:59.059257   14160 start.go:360] acquireMachinesLock for default-k8s-diff-port-124600: {Name:mk780a32308b64368d3930722f9e881df08c3504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:22:59.059257   14160 start.go:364] duration metric: took 0s to acquireMachinesLock for "default-k8s-diff-port-124600"
	I1212 21:22:59.059257   14160 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:22:59.059257   14160 fix.go:54] fixHost starting: 
	I1212 21:22:59.066252   14160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124600 --format={{.State.Status}}
	I1212 21:22:59.129461   14160 fix.go:112] recreateIfNeeded on default-k8s-diff-port-124600: state=Stopped err=<nil>
	W1212 21:22:59.129461   14160 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:22:59.133088   14160 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-124600" ...
	I1212 21:22:59.136686   14160 cli_runner.go:164] Run: docker start default-k8s-diff-port-124600
	I1212 21:22:59.862889   14160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124600 --format={{.State.Status}}
	I1212 21:22:59.919149   14160 kic.go:430] container "default-k8s-diff-port-124600" state is running.
	I1212 21:22:59.924156   14160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-124600
	I1212 21:22:59.977149   14160 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\config.json ...
	I1212 21:22:59.979157   14160 machine.go:94] provisionDockerMachine start ...
	I1212 21:22:59.982162   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:00.038158   14160 main.go:143] libmachine: Using SSH client type: native
	I1212 21:23:00.038158   14160 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62677 <nil> <nil>}
	I1212 21:23:00.038158   14160 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:23:00.040164   14160 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 21:23:03.234044   14160 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-124600
	
	I1212 21:23:03.234044   14160 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-124600"
	I1212 21:23:03.237963   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:03.294306   14160 main.go:143] libmachine: Using SSH client type: native
	I1212 21:23:03.294306   14160 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62677 <nil> <nil>}
	I1212 21:23:03.294306   14160 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-124600 && echo "default-k8s-diff-port-124600" | sudo tee /etc/hostname
	I1212 21:23:03.491471   14160 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-124600
	
	I1212 21:23:03.495244   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:03.552274   14160 main.go:143] libmachine: Using SSH client type: native
	I1212 21:23:03.552715   14160 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62677 <nil> <nil>}
	I1212 21:23:03.552715   14160 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-124600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-124600/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-124600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:23:03.726759   14160 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:23:03.726759   14160 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1212 21:23:03.726759   14160 ubuntu.go:190] setting up certificates
	I1212 21:23:03.726759   14160 provision.go:84] configureAuth start
	I1212 21:23:03.730596   14160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-124600
	I1212 21:23:03.786827   14160 provision.go:143] copyHostCerts
	I1212 21:23:03.787473   14160 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1212 21:23:03.787473   14160 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1212 21:23:03.787473   14160 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 21:23:03.788324   14160 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1212 21:23:03.788324   14160 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1212 21:23:03.788845   14160 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 21:23:03.789576   14160 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1212 21:23:03.789576   14160 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1212 21:23:03.789576   14160 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1212 21:23:03.790404   14160 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.default-k8s-diff-port-124600 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-124600 localhost minikube]
	I1212 21:23:04.028472   14160 provision.go:177] copyRemoteCerts
	I1212 21:23:04.032783   14160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:23:04.035720   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:04.090685   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	I1212 21:23:04.220108   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 21:23:04.251841   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1249 bytes)
	I1212 21:23:04.283040   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 21:23:04.313548   14160 provision.go:87] duration metric: took 586.7803ms to configureAuth
	I1212 21:23:04.313548   14160 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:23:04.313548   14160 config.go:182] Loaded profile config "default-k8s-diff-port-124600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1212 21:23:04.319686   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:04.374458   14160 main.go:143] libmachine: Using SSH client type: native
	I1212 21:23:04.375110   14160 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62677 <nil> <nil>}
	I1212 21:23:04.375110   14160 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 21:23:04.546890   14160 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1212 21:23:04.546890   14160 ubuntu.go:71] root file system type: overlay
	I1212 21:23:04.546890   14160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 21:23:04.551279   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:04.607300   14160 main.go:143] libmachine: Using SSH client type: native
	I1212 21:23:04.607818   14160 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62677 <nil> <nil>}
	I1212 21:23:04.607929   14160 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 21:23:04.799190   14160 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 21:23:04.802868   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:04.862025   14160 main.go:143] libmachine: Using SSH client type: native
	I1212 21:23:04.862025   14160 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62677 <nil> <nil>}
	I1212 21:23:04.862025   14160 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 21:23:05.043356   14160 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:23:05.043406   14160 machine.go:97] duration metric: took 5.0641684s to provisionDockerMachine
	I1212 21:23:05.043449   14160 start.go:293] postStartSetup for "default-k8s-diff-port-124600" (driver="docker")
	I1212 21:23:05.043449   14160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:23:05.047805   14160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:23:05.051418   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:05.110898   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	I1212 21:23:05.255814   14160 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:23:05.264052   14160 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:23:05.264052   14160 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:23:05.264052   14160 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1212 21:23:05.264766   14160 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1212 21:23:05.265608   14160 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> 133962.pem in /etc/ssl/certs
	I1212 21:23:05.270881   14160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:23:05.288263   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /etc/ssl/certs/133962.pem (1708 bytes)
	I1212 21:23:05.316332   14160 start.go:296] duration metric: took 272.8783ms for postStartSetup
	I1212 21:23:05.320908   14160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:23:05.324174   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:05.375311   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	I1212 21:23:05.511900   14160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:23:05.522084   14160 fix.go:56] duration metric: took 6.4622006s for fixHost
	I1212 21:23:05.522084   14160 start.go:83] releasing machines lock for "default-k8s-diff-port-124600", held for 6.4627242s
	I1212 21:23:05.525524   14160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-124600
	I1212 21:23:05.580943   14160 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1212 21:23:05.584825   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:05.585512   14160 ssh_runner.go:195] Run: cat /version.json
	I1212 21:23:05.589557   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:05.645453   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	I1212 21:23:05.647465   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	W1212 21:23:05.764866   14160 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1212 21:23:05.777208   14160 ssh_runner.go:195] Run: systemctl --version
	I1212 21:23:05.795091   14160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:23:05.805053   14160 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:23:05.808995   14160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:23:05.822377   14160 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:23:05.822377   14160 start.go:496] detecting cgroup driver to use...
	I1212 21:23:05.822377   14160 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:23:05.822377   14160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:23:05.850571   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1212 21:23:05.860918   14160 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1212 21:23:05.860962   14160 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1212 21:23:05.870950   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 21:23:05.886032   14160 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 21:23:05.890300   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 21:23:05.911690   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:23:05.931881   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 21:23:05.951355   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:23:05.972217   14160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:23:05.989654   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 21:23:06.008555   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 21:23:06.029580   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 21:23:06.051557   14160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:23:06.068272   14160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:23:06.088555   14160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:23:06.232851   14160 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 21:23:06.395580   14160 start.go:496] detecting cgroup driver to use...
	I1212 21:23:06.396135   14160 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:23:06.401664   14160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 21:23:06.427774   14160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:23:06.449987   14160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:23:06.530054   14160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:23:06.552557   14160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 21:23:06.573212   14160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:23:06.601206   14160 ssh_runner.go:195] Run: which cri-dockerd
	I1212 21:23:06.613316   14160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 21:23:06.629256   14160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1212 21:23:06.655736   14160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 21:23:06.808191   14160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 21:23:06.948697   14160 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 21:23:06.949225   14160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 21:23:06.973857   14160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1212 21:23:06.995178   14160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:23:07.159801   14160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 21:23:08.387280   14160 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.2274602s)
	I1212 21:23:08.392059   14160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:23:08.414696   14160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1212 21:23:08.439024   14160 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1212 21:23:08.465914   14160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:23:08.488326   14160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 21:23:08.636890   14160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 21:23:08.775314   14160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:23:08.926196   14160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 21:23:08.950709   14160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1212 21:23:08.974437   14160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:23:09.109676   14160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1212 21:23:09.227758   14160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:23:09.246593   14160 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 21:23:09.251694   14160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 21:23:09.259250   14160 start.go:564] Will wait 60s for crictl version
	I1212 21:23:09.263473   14160 ssh_runner.go:195] Run: which crictl
	I1212 21:23:09.274454   14160 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:23:09.319908   14160 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1212 21:23:09.323619   14160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:23:09.371068   14160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:23:09.415300   14160 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.1.2 ...
	I1212 21:23:09.420229   14160 cli_runner.go:164] Run: docker exec -t default-k8s-diff-port-124600 dig +short host.docker.internal
	I1212 21:23:09.561538   14160 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1212 21:23:09.566410   14160 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1212 21:23:09.573305   14160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:23:09.594186   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:09.649016   14160 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-124600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-124600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 21:23:09.649995   14160 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1212 21:23:09.652859   14160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:23:09.686348   14160 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1212 21:23:09.686348   14160 docker.go:621] Images already preloaded, skipping extraction
	I1212 21:23:09.689834   14160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:23:09.722637   14160 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1212 21:23:09.722717   14160 cache_images.go:86] Images are preloaded, skipping loading
	I1212 21:23:09.722717   14160 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.2 docker true true} ...
	I1212 21:23:09.722968   14160 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-124600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-124600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:23:09.726467   14160 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 21:23:09.804166   14160 cni.go:84] Creating CNI manager for ""
	I1212 21:23:09.804166   14160 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:23:09.804166   14160 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 21:23:09.804166   14160 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-124600 NodeName:default-k8s-diff-port-124600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:23:09.804776   14160 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-124600"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:23:09.809184   14160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 21:23:09.822880   14160 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:23:09.827517   14160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:23:09.843159   14160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1212 21:23:09.865173   14160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:23:09.883664   14160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1212 21:23:09.910110   14160 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 21:23:09.917548   14160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:23:09.936437   14160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:23:10.076798   14160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:23:10.099969   14160 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600 for IP: 192.168.76.2
	I1212 21:23:10.099969   14160 certs.go:195] generating shared ca certs ...
	I1212 21:23:10.099969   14160 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:23:10.100633   14160 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1212 21:23:10.100633   14160 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1212 21:23:10.100633   14160 certs.go:257] generating profile certs ...
	I1212 21:23:10.101754   14160 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\client.key
	I1212 21:23:10.102187   14160 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\apiserver.key.c1ba716d
	I1212 21:23:10.102537   14160 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\proxy-client.key
	I1212 21:23:10.103938   14160 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem (1338 bytes)
	W1212 21:23:10.103938   14160 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396_empty.pem, impossibly tiny 0 bytes
	I1212 21:23:10.103938   14160 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1212 21:23:10.104497   14160 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 21:23:10.104785   14160 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 21:23:10.105145   14160 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1212 21:23:10.105904   14160 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem (1708 bytes)
	I1212 21:23:10.107597   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:23:10.138041   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 21:23:10.169285   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:23:10.199761   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:23:10.228706   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 21:23:10.259268   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 21:23:10.319083   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:23:10.408082   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:23:10.504827   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:23:10.535027   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem --> /usr/share/ca-certificates/13396.pem (1338 bytes)
	I1212 21:23:10.606848   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /usr/share/ca-certificates/133962.pem (1708 bytes)
	I1212 21:23:10.641191   14160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
E1212 21:28:05.621825   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
	I1212 21:23:10.699040   14160 ssh_runner.go:195] Run: openssl version
	I1212 21:23:10.713021   14160 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/133962.pem
	I1212 21:23:10.729389   14160 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/133962.pem /etc/ssl/certs/133962.pem
	I1212 21:23:10.746196   14160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133962.pem
	I1212 21:23:10.754411   14160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 21:23:10.759187   14160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133962.pem
	I1212 21:23:10.807227   14160 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:23:10.824046   14160 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:23:10.841672   14160 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:23:10.866000   14160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:23:10.875373   14160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:23:10.880699   14160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:23:10.937889   14160 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:23:10.955118   14160 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13396.pem
	I1212 21:23:10.975153   14160 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13396.pem /etc/ssl/certs/13396.pem
	I1212 21:23:10.995392   14160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13396.pem
	I1212 21:23:11.003494   14160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 21:23:11.008922   14160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13396.pem
	I1212 21:23:11.057570   14160 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:23:11.076453   14160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:23:11.089632   14160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:23:11.142247   14160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:23:11.218728   14160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:23:11.416273   14160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:23:11.544319   14160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:23:11.636634   14160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 21:23:11.685985   14160 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-124600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-124600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:23:11.690036   14160 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 21:23:11.724509   14160 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:23:11.737581   14160 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 21:23:11.737638   14160 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 21:23:11.743506   14160 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:23:11.757047   14160 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:23:11.761811   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:11.815778   14160 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-124600" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:23:11.816493   14160 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-124600" cluster setting kubeconfig missing "default-k8s-diff-port-124600" context setting]
	I1212 21:23:11.816493   14160 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:23:11.838352   14160 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:23:11.855027   14160 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1212 21:23:11.855027   14160 kubeadm.go:602] duration metric: took 117.3468ms to restartPrimaryControlPlane
	I1212 21:23:11.855027   14160 kubeadm.go:403] duration metric: took 169.0394ms to StartCluster
	I1212 21:23:11.855027   14160 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:23:11.855027   14160 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:23:11.856184   14160 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:23:11.856963   14160 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 21:23:11.856963   14160 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 21:23:11.856963   14160 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-124600"
	I1212 21:23:11.856963   14160 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-124600"
	I1212 21:23:11.856963   14160 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-124600"
	I1212 21:23:11.856963   14160 config.go:182] Loaded profile config "default-k8s-diff-port-124600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1212 21:23:11.856963   14160 addons.go:70] Setting metrics-server=true in profile "default-k8s-diff-port-124600"
	I1212 21:23:11.856963   14160 addons.go:239] Setting addon metrics-server=true in "default-k8s-diff-port-124600"
	I1212 21:23:11.856963   14160 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-124600"
	W1212 21:23:11.857487   14160 addons.go:248] addon metrics-server should already be in state true
	I1212 21:23:11.857567   14160 host.go:66] Checking if "default-k8s-diff-port-124600" exists ...
	I1212 21:23:11.856963   14160 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-124600"
	I1212 21:23:11.856963   14160 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-124600"
	W1212 21:23:11.857598   14160 addons.go:248] addon storage-provisioner should already be in state true
	W1212 21:23:11.857598   14160 addons.go:248] addon dashboard should already be in state true
	I1212 21:23:11.857767   14160 host.go:66] Checking if "default-k8s-diff-port-124600" exists ...
	I1212 21:23:11.857819   14160 host.go:66] Checking if "default-k8s-diff-port-124600" exists ...
	I1212 21:23:11.863101   14160 out.go:179] * Verifying Kubernetes components...
	I1212 21:23:11.866976   14160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124600 --format={{.State.Status}}
	I1212 21:23:11.868177   14160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124600 --format={{.State.Status}}
	I1212 21:23:11.870310   14160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124600 --format={{.State.Status}}
	I1212 21:23:11.870461   14160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124600 --format={{.State.Status}}
	I1212 21:23:11.871764   14160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:23:11.932064   14160 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1212 21:23:11.934081   14160 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:23:11.942073   14160 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:23:11.942073   14160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:23:11.944073   14160 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1212 21:23:11.945075   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:11.947064   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 21:23:11.947064   14160 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 21:23:11.951064   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:11.953072   14160 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-124600"
	W1212 21:23:11.953072   14160 addons.go:248] addon default-storageclass should already be in state true
	I1212 21:23:11.953072   14160 host.go:66] Checking if "default-k8s-diff-port-124600" exists ...
	I1212 21:23:11.962072   14160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124600 --format={{.State.Status}}
	I1212 21:23:11.977069   14160 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 21:23:11.983067   14160 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 21:23:11.983067   14160 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 21:23:11.988070   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:12.005074   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	I1212 21:23:12.009067   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	I1212 21:23:12.019066   14160 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:23:12.019066   14160 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:23:12.022067   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:12.046070   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	I1212 21:23:12.073066   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	I1212 21:23:12.092898   14160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:23:12.116354   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:12.165367   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 21:23:12.165367   14160 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 21:23:12.167359   14160 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-124600" to be "Ready" ...
	I1212 21:23:12.169365   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:23:12.186351   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 21:23:12.186351   14160 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 21:23:12.204423   14160 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 21:23:12.204423   14160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 21:23:12.207045   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 21:23:12.207045   14160 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 21:23:12.230521   14160 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 21:23:12.230521   14160 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 21:23:12.231517   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 21:23:12.231517   14160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 21:23:12.233521   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:23:12.305253   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 21:23:12.305253   14160 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1212 21:23:12.305253   14160 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:23:12.305253   14160 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W1212 21:23:12.386842   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.386922   14160 retry.go:31] will retry after 278.156141ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.390854   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 21:23:12.390854   14160 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 21:23:12.400109   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:23:12.415390   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 21:23:12.415480   14160 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 21:23:12.491717   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 21:23:12.491717   14160 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W1212 21:23:12.492530   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.492530   14160 retry.go:31] will retry after 256.197463ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.512893   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 21:23:12.512893   14160 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 21:23:12.538803   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:23:12.551683   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.551683   14160 retry.go:31] will retry after 265.384209ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:23:12.644080   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.644080   14160 retry.go:31] will retry after 354.535598ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.669419   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:23:12.752922   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.752922   14160 retry.go:31] will retry after 290.803282ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.753921   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:23:12.823384   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1212 21:23:12.917382   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.917460   14160 retry.go:31] will retry after 300.691587ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:13.004960   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 21:23:13.048937   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:23:13.093941   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:13.094016   14160 retry.go:31] will retry after 506.158576ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:13.223508   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:23:13.387160   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:23:13.387160   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:13.387360   14160 retry.go:31] will retry after 272.283438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:13.387397   14160 retry.go:31] will retry after 368.00551ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:13.607806   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:23:13.665164   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:23:13.697618   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:13.698562   14160 retry.go:31] will retry after 669.122462ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:13.760538   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:23:14.372987   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:23:17.195846   14160 node_ready.go:49] node "default-k8s-diff-port-124600" is "Ready"
	I1212 21:23:17.195955   14160 node_ready.go:38] duration metric: took 5.028515s for node "default-k8s-diff-port-124600" to be "Ready" ...
	I1212 21:23:17.195955   14160 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:23:17.200813   14160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:23:20.596132   14160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.9882139s)
	I1212 21:23:20.596672   14160 addons.go:495] Verifying addon metrics-server=true in "default-k8s-diff-port-124600"
	I1212 21:23:21.007551   14160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.3422701s)
	I1212 21:23:21.007551   14160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.2468976s)
	I1212 21:23:21.007551   14160 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.8066778s)
	I1212 21:23:21.007551   14160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (6.634458s)
	I1212 21:23:21.007551   14160 api_server.go:72] duration metric: took 9.150442s to wait for apiserver process to appear ...
	I1212 21:23:21.007551   14160 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:23:21.007551   14160 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62681/healthz ...
	I1212 21:23:21.010167   14160 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-124600 addons enable metrics-server
	
	I1212 21:23:21.099582   14160 api_server.go:279] https://127.0.0.1:62681/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:23:21.100442   14160 api_server.go:103] status: https://127.0.0.1:62681/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:23:21.196982   14160 out.go:179] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I1212 21:23:21.200574   14160 addons.go:530] duration metric: took 9.3434618s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I1212 21:23:21.508465   14160 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62681/healthz ...
	I1212 21:23:21.591838   14160 api_server.go:279] https://127.0.0.1:62681/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:23:21.591838   14160 api_server.go:103] status: https://127.0.0.1:62681/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:23:22.008494   14160 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62681/healthz ...
	I1212 21:23:22.019209   14160 api_server.go:279] https://127.0.0.1:62681/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:23:22.019209   14160 api_server.go:103] status: https://127.0.0.1:62681/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:23:22.507999   14160 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62681/healthz ...
	I1212 21:23:22.600220   14160 api_server.go:279] https://127.0.0.1:62681/healthz returned 200:
	ok
	I1212 21:23:22.604091   14160 api_server.go:141] control plane version: v1.34.2
	I1212 21:23:22.604864   14160 api_server.go:131] duration metric: took 1.5972868s to wait for apiserver health ...
	I1212 21:23:22.604864   14160 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:23:22.612203   14160 system_pods.go:59] 8 kube-system pods found
	I1212 21:23:22.612251   14160 system_pods.go:61] "coredns-66bc5c9577-r7gwt" [979e9392-cb57-473f-a182-4c5303f99ccb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:23:22.612251   14160 system_pods.go:61] "etcd-default-k8s-diff-port-124600" [e50ec1f7-1f83-4869-a84c-952b0e33f049] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:23:22.612251   14160 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-124600" [059ec5f7-88a0-4be1-97fb-6d5c79ea4d2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:23:22.612251   14160 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-124600" [dbf61887-d9e4-47f1-9d4e-ca99ead26629] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:23:22.612251   14160 system_pods.go:61] "kube-proxy-2pvfg" [2f8eb4c0-64c0-4637-aec2-844cef61dbe8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:23:22.612251   14160 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-124600" [f9ed9e8e-2047-4e30-9bf9-ab3bbf3a89e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:23:22.612251   14160 system_pods.go:61] "metrics-server-746fcd58dc-tqqxz" [592c1b3b-eb79-46ea-a4fd-f44074836d45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:23:22.612251   14160 system_pods.go:61] "storage-provisioner" [78048666-1799-4536-8b49-91d324f75325] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:23:22.612251   14160 system_pods.go:74] duration metric: took 7.3871ms to wait for pod list to return data ...
	I1212 21:23:22.612251   14160 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:23:22.616756   14160 default_sa.go:45] found service account: "default"
	I1212 21:23:22.616756   14160 default_sa.go:55] duration metric: took 4.5056ms for default service account to be created ...
	I1212 21:23:22.616756   14160 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:23:22.695042   14160 system_pods.go:86] 8 kube-system pods found
	I1212 21:23:22.695042   14160 system_pods.go:89] "coredns-66bc5c9577-r7gwt" [979e9392-cb57-473f-a182-4c5303f99ccb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:23:22.695105   14160 system_pods.go:89] "etcd-default-k8s-diff-port-124600" [e50ec1f7-1f83-4869-a84c-952b0e33f049] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:23:22.695105   14160 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-124600" [059ec5f7-88a0-4be1-97fb-6d5c79ea4d2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:23:22.695105   14160 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-124600" [dbf61887-d9e4-47f1-9d4e-ca99ead26629] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:23:22.695105   14160 system_pods.go:89] "kube-proxy-2pvfg" [2f8eb4c0-64c0-4637-aec2-844cef61dbe8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:23:22.695105   14160 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-124600" [f9ed9e8e-2047-4e30-9bf9-ab3bbf3a89e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:23:22.695105   14160 system_pods.go:89] "metrics-server-746fcd58dc-tqqxz" [592c1b3b-eb79-46ea-a4fd-f44074836d45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:23:22.695168   14160 system_pods.go:89] "storage-provisioner" [78048666-1799-4536-8b49-91d324f75325] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:23:22.695168   14160 system_pods.go:126] duration metric: took 78.4107ms to wait for k8s-apps to be running ...
	I1212 21:23:22.695198   14160 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:23:22.700468   14160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:23:22.725193   14160 system_svc.go:56] duration metric: took 29.0191ms WaitForService to wait for kubelet
	I1212 21:23:22.725193   14160 kubeadm.go:587] duration metric: took 10.868056s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:23:22.725193   14160 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:23:22.732161   14160 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1212 21:23:22.732201   14160 node_conditions.go:123] node cpu capacity is 16
	I1212 21:23:22.732201   14160 node_conditions.go:105] duration metric: took 7.0085ms to run NodePressure ...
	I1212 21:23:22.732201   14160 start.go:242] waiting for startup goroutines ...
	I1212 21:23:22.732201   14160 start.go:247] waiting for cluster config update ...
	I1212 21:23:22.732201   14160 start.go:256] writing updated cluster config ...
	I1212 21:23:22.737899   14160 ssh_runner.go:195] Run: rm -f paused
	I1212 21:23:22.745044   14160 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 21:23:22.751178   14160 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-r7gwt" in "kube-system" namespace to be "Ready" or be gone ...
	W1212 21:23:24.761658   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	W1212 21:23:26.763298   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	W1212 21:23:29.260393   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	W1212 21:23:31.262454   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	W1212 21:23:33.762195   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	W1212 21:23:35.762487   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	W1212 21:23:39.113069   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	W1212 21:23:41.263268   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	W1212 21:23:43.269341   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	I1212 21:23:44.762609   14160 pod_ready.go:94] pod "coredns-66bc5c9577-r7gwt" is "Ready"
	I1212 21:23:44.762609   14160 pod_ready.go:86] duration metric: took 22.0110788s for pod "coredns-66bc5c9577-r7gwt" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:44.767351   14160 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-124600" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:44.774353   14160 pod_ready.go:94] pod "etcd-default-k8s-diff-port-124600" is "Ready"
	I1212 21:23:44.774353   14160 pod_ready.go:86] duration metric: took 7.0013ms for pod "etcd-default-k8s-diff-port-124600" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:44.779541   14160 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-124600" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:44.786861   14160 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-124600" is "Ready"
	I1212 21:23:44.786861   14160 pod_ready.go:86] duration metric: took 7.3192ms for pod "kube-apiserver-default-k8s-diff-port-124600" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:44.790455   14160 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-124600" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:44.958511   14160 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-124600" is "Ready"
	I1212 21:23:44.958599   14160 pod_ready.go:86] duration metric: took 168.1411ms for pod "kube-controller-manager-default-k8s-diff-port-124600" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:45.158399   14160 pod_ready.go:83] waiting for pod "kube-proxy-2pvfg" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:45.557624   14160 pod_ready.go:94] pod "kube-proxy-2pvfg" is "Ready"
	I1212 21:23:45.557624   14160 pod_ready.go:86] duration metric: took 399.2187ms for pod "kube-proxy-2pvfg" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:45.758026   14160 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-124600" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:46.157650   14160 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-124600" is "Ready"
	I1212 21:23:46.158249   14160 pod_ready.go:86] duration metric: took 400.1515ms for pod "kube-scheduler-default-k8s-diff-port-124600" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:46.158249   14160 pod_ready.go:40] duration metric: took 23.4127353s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 21:23:46.259466   14160 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 21:23:46.263937   14160 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-124600" cluster and "default" namespace by default
	I1212 21:23:57.490599   11500 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 21:23:57.490599   11500 kubeadm.go:319] 
	I1212 21:23:57.490599   11500 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 21:23:57.495885   11500 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 21:23:57.496001   11500 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 21:23:57.497139   11500 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 21:23:57.497139   11500 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 21:23:57.497139   11500 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 21:23:57.497139   11500 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 21:23:57.497669   11500 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 21:23:57.497746   11500 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 21:23:57.497746   11500 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 21:23:57.497746   11500 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 21:23:57.497746   11500 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 21:23:57.497746   11500 kubeadm.go:319] CONFIG_INET: enabled
	I1212 21:23:57.498271   11500 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 21:23:57.498345   11500 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 21:23:57.498345   11500 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 21:23:57.498345   11500 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 21:23:57.498345   11500 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 21:23:57.498900   11500 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 21:23:57.498900   11500 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 21:23:57.498900   11500 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 21:23:57.498900   11500 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 21:23:57.498900   11500 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 21:23:57.498900   11500 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 21:23:57.499450   11500 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 21:23:57.499613   11500 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 21:23:57.499682   11500 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 21:23:57.499716   11500 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 21:23:57.499716   11500 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 21:23:57.499716   11500 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 21:23:57.499716   11500 kubeadm.go:319] OS: Linux
	I1212 21:23:57.499716   11500 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 21:23:57.500238   11500 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 21:23:57.500273   11500 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 21:23:57.500273   11500 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 21:23:57.500273   11500 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 21:23:57.500273   11500 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 21:23:57.500273   11500 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 21:23:57.500273   11500 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 21:23:57.500863   11500 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 21:23:57.501070   11500 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:23:57.501182   11500 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:23:57.501182   11500 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 21:23:57.501182   11500 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:23:57.504498   11500 out.go:252]   - Generating certificates and keys ...
	I1212 21:23:57.505131   11500 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 21:23:57.505131   11500 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 21:23:57.505131   11500 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 21:23:57.505131   11500 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 21:23:57.505777   11500 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 21:23:57.505777   11500 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 21:23:57.505777   11500 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 21:23:57.506311   11500 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-285600] and IPs [192.168.121.2 127.0.0.1 ::1]
	I1212 21:23:57.506429   11500 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 21:23:57.506429   11500 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-285600] and IPs [192.168.121.2 127.0.0.1 ::1]
	I1212 21:23:57.506429   11500 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 21:23:57.506429   11500 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 21:23:57.506990   11500 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 21:23:57.506990   11500 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:23:57.506990   11500 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:23:57.506990   11500 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 21:23:57.506990   11500 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:23:57.506990   11500 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:23:57.506990   11500 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:23:57.506990   11500 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:23:57.506990   11500 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:23:57.510650   11500 out.go:252]   - Booting up control plane ...
	I1212 21:23:57.510650   11500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:23:57.510650   11500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:23:57.510650   11500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:23:57.511664   11500 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:23:57.511664   11500 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 21:23:57.511664   11500 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 21:23:57.511664   11500 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:23:57.511664   11500 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 21:23:57.511664   11500 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 21:23:57.512655   11500 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 21:23:57.512655   11500 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000951132s
	I1212 21:23:57.512655   11500 kubeadm.go:319] 
	I1212 21:23:57.512655   11500 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 21:23:57.512655   11500 kubeadm.go:319] 	- The kubelet is not running
	I1212 21:23:57.512655   11500 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 21:23:57.512655   11500 kubeadm.go:319] 
	I1212 21:23:57.512655   11500 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 21:23:57.512655   11500 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 21:23:57.512655   11500 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 21:23:57.512655   11500 kubeadm.go:319] 
	W1212 21:23:57.513649   11500 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-285600] and IPs [192.168.121.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-285600] and IPs [192.168.121.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000951132s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 21:23:57.516687   11500 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1212 21:23:57.973632   11500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:23:58.000358   11500 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 21:23:58.005518   11500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:23:58.022197   11500 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:23:58.022197   11500 kubeadm.go:158] found existing configuration files:
	
	I1212 21:23:58.026872   11500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 21:23:58.039115   11500 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 21:23:58.043123   11500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 21:23:58.060114   11500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 21:23:58.073122   11500 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 21:23:58.076119   11500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 21:23:58.092125   11500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 21:23:58.107123   11500 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 21:23:58.112123   11500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 21:23:58.132133   11500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 21:23:58.145128   11500 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 21:23:58.149118   11500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 21:23:58.165115   11500 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 21:23:58.280707   11500 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1212 21:23:58.378404   11500 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 21:23:58.484549   11500 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 21:26:50.572138    3280 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 21:26:50.572138    3280 kubeadm.go:319] 
	I1212 21:26:50.572138    3280 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 21:26:50.576372    3280 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 21:26:50.576562    3280 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 21:26:50.576743    3280 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 21:26:50.576743    3280 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 21:26:50.576743    3280 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 21:26:50.576743    3280 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 21:26:50.576743    3280 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 21:26:50.577278    3280 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 21:26:50.577527    3280 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 21:26:50.577527    3280 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 21:26:50.577527    3280 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 21:26:50.577527    3280 kubeadm.go:319] CONFIG_INET: enabled
	I1212 21:26:50.577527    3280 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 21:26:50.577527    3280 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 21:26:50.578180    3280 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 21:26:50.578223    3280 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 21:26:50.578223    3280 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 21:26:50.578223    3280 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 21:26:50.578223    3280 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 21:26:50.578753    3280 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 21:26:50.578857    3280 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 21:26:50.579009    3280 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 21:26:50.579109    3280 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 21:26:50.579235    3280 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 21:26:50.579500    3280 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 21:26:50.579604    3280 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 21:26:50.579832    3280 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 21:26:50.579931    3280 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 21:26:50.580029    3280 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 21:26:50.580029    3280 kubeadm.go:319] OS: Linux
	I1212 21:26:50.580029    3280 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 21:26:50.580029    3280 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 21:26:50.580029    3280 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 21:26:50.580029    3280 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 21:26:50.580562    3280 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 21:26:50.580709    3280 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 21:26:50.580788    3280 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 21:26:50.580931    3280 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 21:26:50.580972    3280 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 21:26:50.580972    3280 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:26:50.580972    3280 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:26:50.581495    3280 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 21:26:50.581626    3280 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:26:50.585055    3280 out.go:252]   - Generating certificates and keys ...
	I1212 21:26:50.585055    3280 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 21:26:50.585055    3280 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 21:26:50.585694    3280 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 21:26:50.585694    3280 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 21:26:50.585694    3280 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 21:26:50.585694    3280 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 21:26:50.585694    3280 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 21:26:50.586227    3280 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-449900] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1212 21:26:50.586357    3280 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 21:26:50.586417    3280 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-449900] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1212 21:26:50.586417    3280 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 21:26:50.586417    3280 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 21:26:50.586417    3280 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 21:26:50.587005    3280 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:26:50.587068    3280 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:26:50.587068    3280 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 21:26:50.587068    3280 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:26:50.587068    3280 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:26:50.587068    3280 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:26:50.587734    3280 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:26:50.587927    3280 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:26:50.590646    3280 out.go:252]   - Booting up control plane ...
	I1212 21:26:50.591259    3280 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:26:50.591837    3280 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:26:50.591837    3280 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:26:50.591837    3280 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:26:50.591837    3280 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 21:26:50.592415    3280 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 21:26:50.592415    3280 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:26:50.592415    3280 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 21:26:50.592415    3280 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 21:26:50.592415    3280 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 21:26:50.592415    3280 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001153116s
	I1212 21:26:50.592415    3280 kubeadm.go:319] 
	I1212 21:26:50.593382    3280 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 21:26:50.593382    3280 kubeadm.go:319] 	- The kubelet is not running
	I1212 21:26:50.593382    3280 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 21:26:50.593382    3280 kubeadm.go:319] 
	I1212 21:26:50.593382    3280 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 21:26:50.593382    3280 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 21:26:50.593382    3280 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 21:26:50.593382    3280 kubeadm.go:319] 
	W1212 21:26:50.593382    3280 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-449900] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-449900] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001153116s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 21:26:50.597384    3280 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1212 21:26:51.058393    3280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:26:51.077528    3280 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 21:26:51.081780    3280 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:26:51.095285    3280 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:26:51.095342    3280 kubeadm.go:158] found existing configuration files:
	
	I1212 21:26:51.100877    3280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 21:26:51.114399    3280 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 21:26:51.119274    3280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 21:26:51.137891    3280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 21:26:51.152853    3280 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 21:26:51.157180    3280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 21:26:51.176783    3280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 21:26:51.190524    3280 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 21:26:51.194597    3280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 21:26:51.212488    3280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 21:26:51.228065    3280 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 21:26:51.232039    3280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 21:26:51.250057    3280 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 21:26:51.372297    3280 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1212 21:26:51.461499    3280 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 21:26:51.553708    3280 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 21:27:59.635671   11500 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 21:27:59.635671   11500 kubeadm.go:319] 
	I1212 21:27:59.636285   11500 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 21:27:59.640685   11500 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 21:27:59.640685   11500 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 21:27:59.641210   11500 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 21:27:59.641454   11500 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 21:27:59.641454   11500 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 21:27:59.641454   11500 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 21:27:59.641454   11500 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 21:27:59.641454   11500 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 21:27:59.641454   11500 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 21:27:59.642159   11500 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 21:27:59.642187   11500 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 21:27:59.642187   11500 kubeadm.go:319] CONFIG_INET: enabled
	I1212 21:27:59.642187   11500 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 21:27:59.642187   11500 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 21:27:59.642718   11500 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 21:27:59.642918   11500 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 21:27:59.643104   11500 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 21:27:59.643295   11500 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 21:27:59.643295   11500 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 21:27:59.643295   11500 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 21:27:59.643295   11500 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 21:27:59.643295   11500 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 21:27:59.643935   11500 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 21:27:59.643987   11500 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 21:27:59.643987   11500 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 21:27:59.643987   11500 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 21:27:59.643987   11500 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 21:27:59.643987   11500 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 21:27:59.644635   11500 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 21:27:59.644733   11500 kubeadm.go:319] OS: Linux
	I1212 21:27:59.644880   11500 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 21:27:59.645003   11500 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 21:27:59.645114   11500 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 21:27:59.645225   11500 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 21:27:59.645248   11500 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 21:27:59.645248   11500 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 21:27:59.645248   11500 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 21:27:59.645248   11500 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 21:27:59.645248   11500 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 21:27:59.645248   11500 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:27:59.645998   11500 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:27:59.646240   11500 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 21:27:59.646401   11500 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:27:59.649353   11500 out.go:252]   - Generating certificates and keys ...
	I1212 21:27:59.649353   11500 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 21:27:59.649353   11500 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 21:27:59.649353   11500 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 21:27:59.649996   11500 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 21:27:59.649996   11500 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 21:27:59.649996   11500 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 21:27:59.649996   11500 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 21:27:59.649996   11500 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 21:27:59.650580   11500 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 21:27:59.650580   11500 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 21:27:59.650580   11500 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 21:27:59.650580   11500 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:27:59.650580   11500 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:27:59.650580   11500 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 21:27:59.651191   11500 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:27:59.651254   11500 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:27:59.651254   11500 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:27:59.651254   11500 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:27:59.651254   11500 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:27:59.653668   11500 out.go:252]   - Booting up control plane ...
	I1212 21:27:59.653940   11500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 21:27:59.655077   11500 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 21:27:59.655321   11500 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 21:27:59.655492   11500 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00060482s
	I1212 21:27:59.655492   11500 kubeadm.go:319] 
	I1212 21:27:59.655630   11500 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 21:27:59.655630   11500 kubeadm.go:319] 	- The kubelet is not running
	I1212 21:27:59.655821   11500 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 21:27:59.655821   11500 kubeadm.go:319] 
	I1212 21:27:59.656041   11500 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 21:27:59.656041   11500 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 21:27:59.656041   11500 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 21:27:59.656041   11500 kubeadm.go:319] 
	I1212 21:27:59.656041   11500 kubeadm.go:403] duration metric: took 8m4.8179078s to StartCluster
	I1212 21:27:59.656041   11500 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:27:59.659651   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:27:59.720934   11500 cri.go:89] found id: ""
	I1212 21:27:59.720934   11500 logs.go:282] 0 containers: []
	W1212 21:27:59.720934   11500 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:27:59.720934   11500 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:27:59.725183   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:27:59.766585   11500 cri.go:89] found id: ""
	I1212 21:27:59.766585   11500 logs.go:282] 0 containers: []
	W1212 21:27:59.766585   11500 logs.go:284] No container was found matching "etcd"
	I1212 21:27:59.766585   11500 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:27:59.771623   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:27:59.811981   11500 cri.go:89] found id: ""
	I1212 21:27:59.811981   11500 logs.go:282] 0 containers: []
	W1212 21:27:59.811981   11500 logs.go:284] No container was found matching "coredns"
	I1212 21:27:59.811981   11500 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:27:59.817402   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:27:59.863867   11500 cri.go:89] found id: ""
	I1212 21:27:59.863867   11500 logs.go:282] 0 containers: []
	W1212 21:27:59.863867   11500 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:27:59.863867   11500 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:27:59.874092   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:27:59.916790   11500 cri.go:89] found id: ""
	I1212 21:27:59.916790   11500 logs.go:282] 0 containers: []
	W1212 21:27:59.916790   11500 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:27:59.916790   11500 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:27:59.921036   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:27:59.972193   11500 cri.go:89] found id: ""
	I1212 21:27:59.972193   11500 logs.go:282] 0 containers: []
	W1212 21:27:59.972193   11500 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:27:59.972193   11500 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:27:59.976673   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:28:00.020419   11500 cri.go:89] found id: ""
	I1212 21:28:00.020419   11500 logs.go:282] 0 containers: []
	W1212 21:28:00.020419   11500 logs.go:284] No container was found matching "kindnet"
	I1212 21:28:00.020419   11500 logs.go:123] Gathering logs for container status ...
	I1212 21:28:00.020419   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:28:00.075393   11500 logs.go:123] Gathering logs for kubelet ...
	I1212 21:28:00.075393   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:28:00.136556   11500 logs.go:123] Gathering logs for dmesg ...
	I1212 21:28:00.136556   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:28:00.180601   11500 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:28:00.180601   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:28:00.264769   11500 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:28:00.257747   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.258889   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.260305   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.261725   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.262897   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:28:00.257747   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.258889   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.260305   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.261725   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.262897   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:28:00.264769   11500 logs.go:123] Gathering logs for Docker ...
	I1212 21:28:00.264769   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1212 21:28:00.295184   11500 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00060482s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 21:28:00.295286   11500 out.go:285] * 
	W1212 21:28:00.295361   11500 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00060482s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 21:28:00.295361   11500 out.go:285] * 
	W1212 21:28:00.297172   11500 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 21:28:00.306876   11500 out.go:203] 
	W1212 21:28:00.310659   11500 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00060482s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 21:28:00.310880   11500 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 21:28:00.310880   11500 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 21:28:00.312599   11500 out.go:203] 
	
	
	==> Docker <==
	Dec 12 21:19:27 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:27.896422880Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 12 21:19:27 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:27.896514789Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 12 21:19:27 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:27.896525790Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 12 21:19:27 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:27.896530891Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 21:19:27 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:27.896538492Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 12 21:19:27 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:27.896562994Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 12 21:19:27 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:27.896607799Z" level=info msg="Initializing buildkit"
	Dec 12 21:19:28 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:28.063364015Z" level=info msg="Completed buildkit initialization"
	Dec 12 21:19:28 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:28.070100507Z" level=info msg="Daemon has completed initialization"
	Dec 12 21:19:28 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:28.070204618Z" level=info msg="API listen on /run/docker.sock"
	Dec 12 21:19:28 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:28.070271524Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 21:19:28 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:28.070381736Z" level=info msg="API listen on [::]:2376"
	Dec 12 21:19:28 no-preload-285600 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 12 21:19:28 no-preload-285600 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Start docker client with request timeout 0s"
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Loaded network plugin cni"
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 12 21:19:28 no-preload-285600 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:28:05.342496   11194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:05.343657   11194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:05.344490   11194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:05.347122   11194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:05.348465   11194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[Dec12 21:23] CPU: 13 PID: 434005 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f45063b9b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f45063b9af6.
	[  +0.000001] RSP: 002b:00007fffb2f7a7b0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.884221] CPU: 10 PID: 434152 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f1ab5b6bb20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7f1ab5b6baf6.
	[  +0.000001] RSP: 002b:00007fffe51bbd80 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +3.005046] tmpfs: Unknown parameter 'noswap'
	[Dec12 21:24] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 21:28:05 up  2:29,  0 user,  load average: 0.64, 2.64, 3.69
	Linux no-preload-285600 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 21:28:02 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:28:02 no-preload-285600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 12 21:28:02 no-preload-285600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:28:02 no-preload-285600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:28:02 no-preload-285600 kubelet[11021]: E1212 21:28:02.896135   11021 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:28:02 no-preload-285600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:28:02 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:28:03 no-preload-285600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 12 21:28:03 no-preload-285600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:28:03 no-preload-285600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:28:03 no-preload-285600 kubelet[11046]: E1212 21:28:03.640364   11046 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:28:03 no-preload-285600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:28:03 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:28:04 no-preload-285600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 326.
	Dec 12 21:28:04 no-preload-285600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:28:04 no-preload-285600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:28:04 no-preload-285600 kubelet[11074]: E1212 21:28:04.388353   11074 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:28:04 no-preload-285600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:28:04 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:28:05 no-preload-285600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 327.
	Dec 12 21:28:05 no-preload-285600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:28:05 no-preload-285600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:28:05 no-preload-285600 kubelet[11132]: E1212 21:28:05.138529   11132 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:28:05 no-preload-285600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:28:05 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-285600 -n no-preload-285600
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-285600 -n no-preload-285600: exit status 6 (588.0879ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1212 21:28:06.459773    5324 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-285600" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-285600" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-285600
helpers_test.go:244: (dbg) docker inspect no-preload-285600:

-- stdout --
	[
	    {
	        "Id": "87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941",
	        "Created": "2025-12-12T21:19:18.160660304Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 389675,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T21:19:18.519800705Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941/hostname",
	        "HostsPath": "/var/lib/docker/containers/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941/hosts",
	        "LogPath": "/var/lib/docker/containers/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941-json.log",
	        "Name": "/no-preload-285600",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-285600:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-285600",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7f57614ba244d0d3b05389aca0ac3118a52910d8ae213a8eb43d4329923d883e-init/diff:/var/lib/docker/overlay2/98a48e148b465d6f3cfef773b7defe35ccd4e6e0eb3c49d8b329f27b5b52fa09/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7f57614ba244d0d3b05389aca0ac3118a52910d8ae213a8eb43d4329923d883e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7f57614ba244d0d3b05389aca0ac3118a52910d8ae213a8eb43d4329923d883e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7f57614ba244d0d3b05389aca0ac3118a52910d8ae213a8eb43d4329923d883e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-285600",
	                "Source": "/var/lib/docker/volumes/no-preload-285600/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-285600",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-285600",
	                "name.minikube.sigs.k8s.io": "no-preload-285600",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5af3f413668a0d538b65d8f61bdb8f76c9d3fffc039f5c39eab88c8e538214f8",
	            "SandboxKey": "/var/run/docker/netns/5af3f413668a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62132"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62133"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62134"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-285600": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.121.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:79:02",
	                    "DriverOpts": null,
	                    "NetworkID": "eade8f7b7c484afc6ac9fb22c89a1319287a418e545549931d13e1b2247abede",
	                    "EndpointID": "41d46d4540a8534435610e3455fd03f86fe030069ea47ea0bc7248badc5ae81c",
	                    "Gateway": "192.168.121.1",
	                    "IPAddress": "192.168.121.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-285600",
	                        "87204d707694"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-285600 -n no-preload-285600
E1212 21:28:06.691478   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-285600 -n no-preload-285600: exit status 6 (607.271ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1212 21:28:07.158133    6804 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-285600" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-285600 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-285600 logs -n 25: (1.0944462s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                            ARGS                                                                                                            │           PROFILE            │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-124600 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:21 UTC │ 12 Dec 25 21:22 UTC │
	│ addons  │ enable metrics-server -p embed-certs-729900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                   │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:21 UTC │ 12 Dec 25 21:21 UTC │
	│ stop    │ -p embed-certs-729900 --alsologtostderr -v=3                                                                                                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:21 UTC │ 12 Dec 25 21:21 UTC │
	│ addons  │ enable dashboard -p embed-certs-729900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                              │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:21 UTC │ 12 Dec 25 21:21 UTC │
	│ start   │ -p embed-certs-729900 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.2                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:21 UTC │ 12 Dec 25 21:22 UTC │
	│ image   │ old-k8s-version-246400 image list --format=json                                                                                                                                                                            │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ pause   │ -p old-k8s-version-246400 --alsologtostderr -v=1                                                                                                                                                                           │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ unpause │ -p old-k8s-version-246400 --alsologtostderr -v=1                                                                                                                                                                           │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ delete  │ -p old-k8s-version-246400                                                                                                                                                                                                  │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ delete  │ -p old-k8s-version-246400                                                                                                                                                                                                  │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ start   │ -p newest-cni-449900 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-124600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                         │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ stop    │ -p default-k8s-diff-port-124600 --alsologtostderr -v=3                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ image   │ embed-certs-729900 image list --format=json                                                                                                                                                                                │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ pause   │ -p embed-certs-729900 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ unpause │ -p embed-certs-729900 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-124600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                    │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ start   │ -p default-k8s-diff-port-124600 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:23 UTC │
	│ delete  │ -p embed-certs-729900                                                                                                                                                                                                      │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:23 UTC │
	│ delete  │ -p embed-certs-729900                                                                                                                                                                                                      │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ image   │ default-k8s-diff-port-124600 image list --format=json                                                                                                                                                                      │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ pause   │ -p default-k8s-diff-port-124600 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ unpause │ -p default-k8s-diff-port-124600 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ delete  │ -p default-k8s-diff-port-124600                                                                                                                                                                                            │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ delete  │ -p default-k8s-diff-port-124600                                                                                                                                                                                            │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 21:22:58
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 21:22:58.216335   14160 out.go:360] Setting OutFile to fd 1132 ...
	I1212 21:22:58.266331   14160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:22:58.266331   14160 out.go:374] Setting ErrFile to fd 1508...
	I1212 21:22:58.266331   14160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:22:58.280322   14160 out.go:368] Setting JSON to false
	I1212 21:22:58.283341   14160 start.go:133] hostinfo: {"hostname":"minikube4","uptime":8716,"bootTime":1765565862,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 21:22:58.283341   14160 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 21:22:58.287338   14160 out.go:179] * [default-k8s-diff-port-124600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 21:22:58.290341   14160 notify.go:221] Checking for updates...
	I1212 21:22:58.292332   14160 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:22:58.294328   14160 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:22:58.296340   14160 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 21:22:58.298340   14160 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 21:22:58.301322   14160 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:22:58.304323   14160 config.go:182] Loaded profile config "default-k8s-diff-port-124600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1212 21:22:58.305325   14160 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 21:22:58.434944   14160 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 21:22:58.438949   14160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:22:58.676253   14160 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-12 21:22:58.655092827 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:22:58.680239   14160 out.go:179] * Using the docker driver based on existing profile
	I1212 21:22:58.682239   14160 start.go:309] selected driver: docker
	I1212 21:22:58.682239   14160 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-124600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-124600 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:22:58.682239   14160 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:22:58.732240   14160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:22:58.965241   14160 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:95 OomKillDisable:true NGoroutines:100 SystemTime:2025-12-12 21:22:58.948719453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 In
dexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDesc
ription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Prog
ram Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:22:58.966243   14160 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:22:58.966243   14160 cni.go:84] Creating CNI manager for ""
	I1212 21:22:58.966243   14160 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:22:58.966243   14160 start.go:353] cluster config:
	{Name:default-k8s-diff-port-124600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-124600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:22:58.968243   14160 out.go:179] * Starting "default-k8s-diff-port-124600" primary control-plane node in "default-k8s-diff-port-124600" cluster
	I1212 21:22:58.972244   14160 cache.go:134] Beginning downloading kic base image for docker with docker
	I1212 21:22:58.974236   14160 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:22:58.977243   14160 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1212 21:22:58.977243   14160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:22:58.977243   14160 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1212 21:22:58.977243   14160 cache.go:65] Caching tarball of preloaded images
	I1212 21:22:58.977243   14160 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 21:22:58.978245   14160 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1212 21:22:58.978245   14160 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\config.json ...
	I1212 21:22:59.059257   14160 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:22:59.059257   14160 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:22:59.059257   14160 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:22:59.059257   14160 start.go:360] acquireMachinesLock for default-k8s-diff-port-124600: {Name:mk780a32308b64368d3930722f9e881df08c3504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:22:59.059257   14160 start.go:364] duration metric: took 0s to acquireMachinesLock for "default-k8s-diff-port-124600"
	I1212 21:22:59.059257   14160 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:22:59.059257   14160 fix.go:54] fixHost starting: 
	I1212 21:22:59.066252   14160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124600 --format={{.State.Status}}
	I1212 21:22:59.129461   14160 fix.go:112] recreateIfNeeded on default-k8s-diff-port-124600: state=Stopped err=<nil>
	W1212 21:22:59.129461   14160 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:22:59.133088   14160 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-124600" ...
	I1212 21:22:59.136686   14160 cli_runner.go:164] Run: docker start default-k8s-diff-port-124600
	I1212 21:22:59.862889   14160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124600 --format={{.State.Status}}
	I1212 21:22:59.919149   14160 kic.go:430] container "default-k8s-diff-port-124600" state is running.
	I1212 21:22:59.924156   14160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-124600
	I1212 21:22:59.977149   14160 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\config.json ...
	I1212 21:22:59.979157   14160 machine.go:94] provisionDockerMachine start ...
	I1212 21:22:59.982162   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:00.038158   14160 main.go:143] libmachine: Using SSH client type: native
	I1212 21:23:00.038158   14160 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62677 <nil> <nil>}
	I1212 21:23:00.038158   14160 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:23:00.040164   14160 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 21:23:03.234044   14160 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-124600
	
	I1212 21:23:03.234044   14160 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-124600"
	I1212 21:23:03.237963   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:03.294306   14160 main.go:143] libmachine: Using SSH client type: native
	I1212 21:23:03.294306   14160 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62677 <nil> <nil>}
	I1212 21:23:03.294306   14160 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-124600 && echo "default-k8s-diff-port-124600" | sudo tee /etc/hostname
	I1212 21:23:03.491471   14160 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-124600
	
	I1212 21:23:03.495244   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:03.552274   14160 main.go:143] libmachine: Using SSH client type: native
	I1212 21:23:03.552715   14160 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62677 <nil> <nil>}
	I1212 21:23:03.552715   14160 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-124600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-124600/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-124600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:23:03.726759   14160 main.go:143] libmachine: SSH cmd err, output: <nil>: 
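	The match-or-append /etc/hosts logic the provisioner ran over SSH above can be exercised locally against a scratch file. The sketch below is not minikube code: it reproduces the same shape (skip if the hostname is already present, rewrite an existing 127.0.1.1 entry, otherwise append one) using portable [[:space:]] classes instead of GNU \s, and a mktemp file standing in for /etc/hosts.

```shell
#!/bin/sh
# Match-or-append hostname entry, as in the logged SSH command,
# run against a scratch copy instead of the real /etc/hosts.
HOSTS=$(mktemp)
NAME=default-k8s-diff-port-124600
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

# If no line already ends with the hostname, either rewrite the
# existing 127.0.1.1 entry or append a fresh one.
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
grep '^127\.0\.1\.1' "$HOSTS"
```

	Running the script twice leaves the file unchanged on the second pass, which is why the provisioner can safely re-run it on every restart.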
	I1212 21:23:03.726759   14160 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1212 21:23:03.726759   14160 ubuntu.go:190] setting up certificates
	I1212 21:23:03.726759   14160 provision.go:84] configureAuth start
	I1212 21:23:03.730596   14160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-124600
	I1212 21:23:03.786827   14160 provision.go:143] copyHostCerts
	I1212 21:23:03.787473   14160 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1212 21:23:03.787473   14160 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1212 21:23:03.787473   14160 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 21:23:03.788324   14160 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1212 21:23:03.788324   14160 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1212 21:23:03.788845   14160 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 21:23:03.789576   14160 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1212 21:23:03.789576   14160 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1212 21:23:03.789576   14160 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1212 21:23:03.790404   14160 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.default-k8s-diff-port-124600 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-124600 localhost minikube]
	I1212 21:23:04.028472   14160 provision.go:177] copyRemoteCerts
	I1212 21:23:04.032783   14160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:23:04.035720   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:04.090685   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	I1212 21:23:04.220108   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 21:23:04.251841   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1249 bytes)
	I1212 21:23:04.283040   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 21:23:04.313548   14160 provision.go:87] duration metric: took 586.7803ms to configureAuth
	I1212 21:23:04.313548   14160 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:23:04.313548   14160 config.go:182] Loaded profile config "default-k8s-diff-port-124600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1212 21:23:04.319686   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:04.374458   14160 main.go:143] libmachine: Using SSH client type: native
	I1212 21:23:04.375110   14160 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62677 <nil> <nil>}
	I1212 21:23:04.375110   14160 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 21:23:04.546890   14160 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1212 21:23:04.546890   14160 ubuntu.go:71] root file system type: overlay
	I1212 21:23:04.546890   14160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 21:23:04.551279   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:04.607300   14160 main.go:143] libmachine: Using SSH client type: native
	I1212 21:23:04.607818   14160 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62677 <nil> <nil>}
	I1212 21:23:04.607929   14160 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 21:23:04.799190   14160 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 21:23:04.802868   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:04.862025   14160 main.go:143] libmachine: Using SSH client type: native
	I1212 21:23:04.862025   14160 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62677 <nil> <nil>}
	I1212 21:23:04.862025   14160 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 21:23:05.043356   14160 main.go:143] libmachine: SSH cmd err, output: <nil>: 
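	The diff-or-replace command above only installs docker.service.new and restarts Docker when the rendered unit actually differs from the installed one. A minimal sketch of the same compare-then-replace pattern, using scratch files in place of /lib/systemd/system/docker.service{,.new} and a flag in place of the systemctl calls (illustrative only):

```shell
#!/bin/sh
# Compare-then-replace: act only when the newly rendered file differs
# from the installed one, as in the logged `diff -u ... || { mv ...; }`.
cur=$(mktemp); new=$(mktemp)
echo "old contents" > "$cur"
echo "new contents" > "$new"

restarted=no
if ! diff -u "$cur" "$new" > /dev/null; then
  mv "$new" "$cur"   # install the new rendering
  restarted=yes      # here minikube runs daemon-reload + restart docker
fi
echo "restarted=$restarted"
```

	When the files already match, `diff` exits zero and the restart branch is skipped, so an unchanged configuration never bounces the Docker daemon.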
	I1212 21:23:05.043406   14160 machine.go:97] duration metric: took 5.0641684s to provisionDockerMachine
	I1212 21:23:05.043449   14160 start.go:293] postStartSetup for "default-k8s-diff-port-124600" (driver="docker")
	I1212 21:23:05.043449   14160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:23:05.047805   14160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:23:05.051418   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:05.110898   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	I1212 21:23:05.255814   14160 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:23:05.264052   14160 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:23:05.264052   14160 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:23:05.264052   14160 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1212 21:23:05.264766   14160 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1212 21:23:05.265608   14160 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> 133962.pem in /etc/ssl/certs
	I1212 21:23:05.270881   14160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:23:05.288263   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /etc/ssl/certs/133962.pem (1708 bytes)
	I1212 21:23:05.316332   14160 start.go:296] duration metric: took 272.8783ms for postStartSetup
	I1212 21:23:05.320908   14160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:23:05.324174   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:05.375311   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	I1212 21:23:05.511900   14160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:23:05.522084   14160 fix.go:56] duration metric: took 6.4622006s for fixHost
	I1212 21:23:05.522084   14160 start.go:83] releasing machines lock for "default-k8s-diff-port-124600", held for 6.4627242s
	I1212 21:23:05.525524   14160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-124600
	I1212 21:23:05.580943   14160 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1212 21:23:05.584825   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:05.585512   14160 ssh_runner.go:195] Run: cat /version.json
	I1212 21:23:05.589557   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:05.645453   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	I1212 21:23:05.647465   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	W1212 21:23:05.764866   14160 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1212 21:23:05.777208   14160 ssh_runner.go:195] Run: systemctl --version
	I1212 21:23:05.795091   14160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:23:05.805053   14160 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:23:05.808995   14160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:23:05.822377   14160 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:23:05.822377   14160 start.go:496] detecting cgroup driver to use...
	I1212 21:23:05.822377   14160 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:23:05.822377   14160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:23:05.850571   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1212 21:23:05.860918   14160 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1212 21:23:05.860962   14160 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1212 21:23:05.870950   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 21:23:05.886032   14160 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 21:23:05.890300   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 21:23:05.911690   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:23:05.931881   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 21:23:05.951355   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:23:05.972217   14160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:23:05.989654   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 21:23:06.008555   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 21:23:06.029580   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
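	The run of sed commands above rewrites /etc/containerd/config.toml in place to force the cgroupfs driver, the runc v2 shim, and the CNI conf dir. The cgroup-driver edit can be checked against a tiny sample config (scratch file, not the real config.toml):

```shell
#!/bin/sh
# The SystemdCgroup edit from the log, applied to a sample config.toml;
# the indentation-preserving capture group is the same as in the log.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```

	The `( *)` capture keeps the original indentation, so the edit is safe regardless of how deeply the key is nested in the TOML.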
	I1212 21:23:06.051557   14160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:23:06.068272   14160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:23:06.088555   14160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:23:06.232851   14160 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 21:23:06.395580   14160 start.go:496] detecting cgroup driver to use...
	I1212 21:23:06.396135   14160 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:23:06.401664   14160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 21:23:06.427774   14160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:23:06.449987   14160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:23:06.530054   14160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:23:06.552557   14160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 21:23:06.573212   14160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:23:06.601206   14160 ssh_runner.go:195] Run: which cri-dockerd
	I1212 21:23:06.613316   14160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 21:23:06.629256   14160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1212 21:23:06.655736   14160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 21:23:06.808191   14160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 21:23:06.948697   14160 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 21:23:06.949225   14160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 21:23:06.973857   14160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1212 21:23:06.995178   14160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:23:07.159801   14160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 21:23:08.387280   14160 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.2274602s)
	I1212 21:23:08.392059   14160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:23:08.414696   14160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1212 21:23:08.439024   14160 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1212 21:23:08.465914   14160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:23:08.488326   14160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 21:23:08.636890   14160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 21:23:08.775314   14160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:23:08.926196   14160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 21:23:08.950709   14160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1212 21:23:08.974437   14160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:23:09.109676   14160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1212 21:23:09.227758   14160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:23:09.246593   14160 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 21:23:09.251694   14160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 21:23:09.259250   14160 start.go:564] Will wait 60s for crictl version
	I1212 21:23:09.263473   14160 ssh_runner.go:195] Run: which crictl
	I1212 21:23:09.274454   14160 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:23:09.319908   14160 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1212 21:23:09.323619   14160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:23:09.371068   14160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:23:09.415300   14160 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.1.2 ...
	I1212 21:23:09.420229   14160 cli_runner.go:164] Run: docker exec -t default-k8s-diff-port-124600 dig +short host.docker.internal
	I1212 21:23:09.561538   14160 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1212 21:23:09.566410   14160 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1212 21:23:09.573305   14160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
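	The host.minikube.internal update above uses a filter-and-append rewrite: drop any stale entry with `grep -v`, append the fresh IP, then copy the result back over /etc/hosts. The same pattern on a scratch file (IP and hostname taken from the log; the path is not):

```shell
#!/bin/sh
# Filter-and-append /etc/hosts rewrite, as in the logged command,
# run against a scratch file with a stale host.minikube.internal entry.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n192.168.65.2\thost.minikube.internal\n' > "$hosts"

ip=192.168.65.254
tab=$(printf '\t')
{ grep -v "${tab}host\.minikube\.internal\$" "$hosts"
  printf '%s\thost.minikube.internal\n' "$ip"
} > "$hosts.new"
mv "$hosts.new" "$hosts"
grep 'host\.minikube\.internal' "$hosts"
```

	Because the old entry is filtered before the new one is appended, the file ends up with exactly one host.minikube.internal line no matter how many times the rewrite runs.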
	I1212 21:23:09.594186   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:09.649016   14160 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-124600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-124600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 21:23:09.649995   14160 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1212 21:23:09.652859   14160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:23:09.686348   14160 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1212 21:23:09.686348   14160 docker.go:621] Images already preloaded, skipping extraction
	I1212 21:23:09.689834   14160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:23:09.722637   14160 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1212 21:23:09.722717   14160 cache_images.go:86] Images are preloaded, skipping loading
	I1212 21:23:09.722717   14160 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.2 docker true true} ...
	I1212 21:23:09.722968   14160 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-124600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-124600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:23:09.726467   14160 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 21:23:09.804166   14160 cni.go:84] Creating CNI manager for ""
	I1212 21:23:09.804166   14160 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:23:09.804166   14160 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 21:23:09.804166   14160 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-124600 NodeName:default-k8s-diff-port-124600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:23:09.804776   14160 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-124600"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:23:09.809184   14160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 21:23:09.822880   14160 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:23:09.827517   14160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:23:09.843159   14160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1212 21:23:09.865173   14160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:23:09.883664   14160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1212 21:23:09.910110   14160 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 21:23:09.917548   14160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
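	[editor note] The /etc/hosts command above uses a remove-then-append pattern so repeated starts stay idempotent: any stale entry for the name is filtered out before the fresh one is appended. A minimal sketch of that pattern against a scratch file (the /tmp/hosts.demo path is illustrative, not minikube's):

```shell
# Demo of the idempotent "replace or add host entry" pattern from the log,
# run against a scratch file instead of /etc/hosts.
HOSTS=/tmp/hosts.demo
printf '127.0.0.1\tlocalhost\n192.168.76.1\tcontrol-plane.minikube.internal\n' > "$HOSTS"

# Drop any existing entry for the name, then append the desired mapping.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$HOSTS"; \
  printf '192.168.76.2\tcontrol-plane.minikube.internal\n'; } > /tmp/h.$$
cp /tmp/h.$$ "$HOSTS"

# Exactly one entry remains, pointing at the new IP.
grep 'control-plane.minikube.internal' "$HOSTS"
```

Because the old 192.168.76.1 entry is filtered out first, running the snippet again leaves the file unchanged.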
	I1212 21:23:09.936437   14160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:23:10.076798   14160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:23:10.099969   14160 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600 for IP: 192.168.76.2
	I1212 21:23:10.099969   14160 certs.go:195] generating shared ca certs ...
	I1212 21:23:10.099969   14160 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:23:10.100633   14160 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1212 21:23:10.100633   14160 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1212 21:23:10.100633   14160 certs.go:257] generating profile certs ...
	I1212 21:23:10.101754   14160 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\client.key
	I1212 21:23:10.102187   14160 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\apiserver.key.c1ba716d
	I1212 21:23:10.102537   14160 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\proxy-client.key
	I1212 21:23:10.103938   14160 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem (1338 bytes)
	W1212 21:23:10.103938   14160 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396_empty.pem, impossibly tiny 0 bytes
	I1212 21:23:10.103938   14160 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1212 21:23:10.104497   14160 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 21:23:10.104785   14160 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 21:23:10.105145   14160 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1212 21:23:10.105904   14160 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem (1708 bytes)
	I1212 21:23:10.107597   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:23:10.138041   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 21:23:10.169285   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:23:10.199761   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:23:10.228706   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 21:23:10.259268   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 21:23:10.319083   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:23:10.408082   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:23:10.504827   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:23:10.535027   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem --> /usr/share/ca-certificates/13396.pem (1338 bytes)
	I1212 21:23:10.606848   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /usr/share/ca-certificates/133962.pem (1708 bytes)
	I1212 21:23:10.641191   14160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:23:10.699040   14160 ssh_runner.go:195] Run: openssl version
	I1212 21:23:10.713021   14160 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/133962.pem
	I1212 21:23:10.729389   14160 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/133962.pem /etc/ssl/certs/133962.pem
	I1212 21:23:10.746196   14160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133962.pem
	I1212 21:23:10.754411   14160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 21:23:10.759187   14160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133962.pem
	I1212 21:23:10.807227   14160 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:23:10.824046   14160 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:23:10.841672   14160 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:23:10.866000   14160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:23:10.875373   14160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:23:10.880699   14160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:23:10.937889   14160 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:23:10.955118   14160 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13396.pem
	I1212 21:23:10.975153   14160 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13396.pem /etc/ssl/certs/13396.pem
	I1212 21:23:10.995392   14160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13396.pem
	I1212 21:23:11.003494   14160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 21:23:11.008922   14160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13396.pem
	I1212 21:23:11.057570   14160 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
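	[editor note] The `openssl x509 -hash` runs above compute the subject-name hash that OpenSSL uses to name trust-store symlinks, which is why the subsequent `test -L` probes look for files like /etc/ssl/certs/b5213941.0. A self-contained sketch with a throwaway self-signed certificate (all /tmp paths and the demoCA name are made up for illustration):

```shell
# Generate a throwaway self-signed certificate.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.pem -subj "/CN=demoCA" 2>/dev/null

# The subject hash is what names the trust-store symlink: <hash>.0
HASH=$(openssl x509 -hash -noout -in /tmp/demo-ca.pem)
ln -fs /tmp/demo-ca.pem "/tmp/${HASH}.0"
ls -l "/tmp/${HASH}.0"
```

Tools that verify against a CA directory resolve certificates by exactly this hash-named symlink, so minikube checks the link exists after installing each CA file.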
	I1212 21:23:11.076453   14160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:23:11.089632   14160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:23:11.142247   14160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:23:11.218728   14160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:23:11.416273   14160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:23:11.544319   14160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:23:11.636634   14160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
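	[editor note] The `-checkend 86400` probes above exit non-zero when a certificate will expire within the given number of seconds, which is how minikube decides whether control-plane certs need regeneration before reuse. A sketch with a throwaway one-day certificate (paths are illustrative):

```shell
# Create a certificate valid for one day.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/ck.key -out /tmp/ck.pem -subj "/CN=checkend-demo" 2>/dev/null

# Still valid an hour from now -> exit status 0.
openssl x509 -noout -in /tmp/ck.pem -checkend 3600 && echo "ok for 1h"

# Will not survive two full days -> non-zero exit status.
openssl x509 -noout -in /tmp/ck.pem -checkend 172800 || echo "expires within 2d"
```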
	I1212 21:23:11.685985   14160 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-124600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-124600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:23:11.690036   14160 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 21:23:11.724509   14160 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:23:11.737581   14160 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 21:23:11.737638   14160 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 21:23:11.743506   14160 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:23:11.757047   14160 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:23:11.761811   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:11.815778   14160 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-124600" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:23:11.816493   14160 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-124600" cluster setting kubeconfig missing "default-k8s-diff-port-124600" context setting]
	I1212 21:23:11.816493   14160 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:23:11.838352   14160 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:23:11.855027   14160 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1212 21:23:11.855027   14160 kubeadm.go:602] duration metric: took 117.3468ms to restartPrimaryControlPlane
	I1212 21:23:11.855027   14160 kubeadm.go:403] duration metric: took 169.0394ms to StartCluster
	I1212 21:23:11.855027   14160 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:23:11.855027   14160 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:23:11.856184   14160 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:23:11.856963   14160 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 21:23:11.856963   14160 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 21:23:11.856963   14160 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-124600"
	I1212 21:23:11.856963   14160 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-124600"
	I1212 21:23:11.856963   14160 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-124600"
	I1212 21:23:11.856963   14160 config.go:182] Loaded profile config "default-k8s-diff-port-124600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1212 21:23:11.856963   14160 addons.go:70] Setting metrics-server=true in profile "default-k8s-diff-port-124600"
	I1212 21:23:11.856963   14160 addons.go:239] Setting addon metrics-server=true in "default-k8s-diff-port-124600"
	I1212 21:23:11.856963   14160 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-124600"
	W1212 21:23:11.857487   14160 addons.go:248] addon metrics-server should already be in state true
	I1212 21:23:11.857567   14160 host.go:66] Checking if "default-k8s-diff-port-124600" exists ...
	I1212 21:23:11.856963   14160 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-124600"
	I1212 21:23:11.856963   14160 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-124600"
	W1212 21:23:11.857598   14160 addons.go:248] addon storage-provisioner should already be in state true
	W1212 21:23:11.857598   14160 addons.go:248] addon dashboard should already be in state true
	I1212 21:23:11.857767   14160 host.go:66] Checking if "default-k8s-diff-port-124600" exists ...
	I1212 21:23:11.857819   14160 host.go:66] Checking if "default-k8s-diff-port-124600" exists ...
	I1212 21:23:11.863101   14160 out.go:179] * Verifying Kubernetes components...
	I1212 21:23:11.866976   14160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124600 --format={{.State.Status}}
	I1212 21:23:11.868177   14160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124600 --format={{.State.Status}}
	I1212 21:23:11.870310   14160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124600 --format={{.State.Status}}
	I1212 21:23:11.870461   14160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124600 --format={{.State.Status}}
	I1212 21:23:11.871764   14160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:23:11.932064   14160 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1212 21:23:11.934081   14160 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:23:11.942073   14160 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:23:11.942073   14160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:23:11.944073   14160 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1212 21:23:11.945075   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:11.947064   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 21:23:11.947064   14160 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 21:23:11.951064   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:11.953072   14160 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-124600"
	W1212 21:23:11.953072   14160 addons.go:248] addon default-storageclass should already be in state true
	I1212 21:23:11.953072   14160 host.go:66] Checking if "default-k8s-diff-port-124600" exists ...
	I1212 21:23:11.962072   14160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124600 --format={{.State.Status}}
	I1212 21:23:11.977069   14160 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 21:23:11.983067   14160 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 21:23:11.983067   14160 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 21:23:11.988070   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:12.005074   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	I1212 21:23:12.009067   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	I1212 21:23:12.019066   14160 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:23:12.019066   14160 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:23:12.022067   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:12.046070   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	I1212 21:23:12.073066   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	I1212 21:23:12.092898   14160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:23:12.116354   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:12.165367   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 21:23:12.165367   14160 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 21:23:12.167359   14160 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-124600" to be "Ready" ...
	I1212 21:23:12.169365   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:23:12.186351   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 21:23:12.186351   14160 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 21:23:12.204423   14160 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 21:23:12.204423   14160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 21:23:12.207045   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 21:23:12.207045   14160 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 21:23:12.230521   14160 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 21:23:12.230521   14160 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 21:23:12.231517   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 21:23:12.231517   14160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 21:23:12.233521   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:23:12.305253   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 21:23:12.305253   14160 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1212 21:23:12.305253   14160 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:23:12.305253   14160 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W1212 21:23:12.386842   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.386922   14160 retry.go:31] will retry after 278.156141ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
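	[editor note] The "will retry after 278.156141ms" lines come from minikube's retry helper: the apply fails only because the apiserver is not yet accepting connections on port 8444, so the same command is re-run after a short delay. The retry-until-success shape can be sketched in shell (the `flaky` function and counter file are simulated stand-ins, not minikube code):

```shell
# Simulate a command that fails twice, then succeeds on the third call.
STATE=/tmp/retry.count
echo 0 > "$STATE"
flaky() {
  n=$(cat "$STATE"); n=$((n + 1)); echo "$n" > "$STATE"
  [ "$n" -ge 3 ]   # succeed from the third attempt onward
}

# Retry with a short delay, up to 5 attempts.
attempt=0
until flaky; do
  attempt=$((attempt + 1))
  [ "$attempt" -ge 5 ] && break
  sleep 0.3
done
echo "succeeded after $(cat "$STATE") attempts"
```

minikube's actual implementation (retry.go) additionally randomizes and grows the delay between attempts rather than using a fixed sleep.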
	I1212 21:23:12.390854   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 21:23:12.390854   14160 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 21:23:12.400109   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:23:12.415390   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 21:23:12.415480   14160 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 21:23:12.491717   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 21:23:12.491717   14160 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W1212 21:23:12.492530   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.492530   14160 retry.go:31] will retry after 256.197463ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.512893   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 21:23:12.512893   14160 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 21:23:12.538803   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:23:12.551683   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.551683   14160 retry.go:31] will retry after 265.384209ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:23:12.644080   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.644080   14160 retry.go:31] will retry after 354.535598ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.669419   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:23:12.752922   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.752922   14160 retry.go:31] will retry after 290.803282ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.753921   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:23:12.823384   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1212 21:23:12.917382   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.917460   14160 retry.go:31] will retry after 300.691587ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:13.004960   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 21:23:13.048937   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:23:13.093941   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:13.094016   14160 retry.go:31] will retry after 506.158576ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:13.223508   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:23:13.387160   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:23:13.387160   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:13.387360   14160 retry.go:31] will retry after 272.283438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:13.387397   14160 retry.go:31] will retry after 368.00551ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:13.607806   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:23:13.665164   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:23:13.697618   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:13.698562   14160 retry.go:31] will retry after 669.122462ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:13.760538   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:23:14.372987   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:23:17.195846   14160 node_ready.go:49] node "default-k8s-diff-port-124600" is "Ready"
	I1212 21:23:17.195955   14160 node_ready.go:38] duration metric: took 5.028515s for node "default-k8s-diff-port-124600" to be "Ready" ...
	I1212 21:23:17.195955   14160 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:23:17.200813   14160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:23:20.596132   14160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.9882139s)
	I1212 21:23:20.596672   14160 addons.go:495] Verifying addon metrics-server=true in "default-k8s-diff-port-124600"
	I1212 21:23:21.007551   14160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.3422701s)
	I1212 21:23:21.007551   14160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.2468976s)
	I1212 21:23:21.007551   14160 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.8066778s)
	I1212 21:23:21.007551   14160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (6.634458s)
	I1212 21:23:21.007551   14160 api_server.go:72] duration metric: took 9.150442s to wait for apiserver process to appear ...
	I1212 21:23:21.007551   14160 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:23:21.007551   14160 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62681/healthz ...
	I1212 21:23:21.010167   14160 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-124600 addons enable metrics-server
	
	I1212 21:23:21.099582   14160 api_server.go:279] https://127.0.0.1:62681/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:23:21.100442   14160 api_server.go:103] status: https://127.0.0.1:62681/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:23:21.196982   14160 out.go:179] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I1212 21:23:21.200574   14160 addons.go:530] duration metric: took 9.3434618s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I1212 21:23:21.508465   14160 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62681/healthz ...
	I1212 21:23:21.591838   14160 api_server.go:279] https://127.0.0.1:62681/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:23:21.591838   14160 api_server.go:103] status: https://127.0.0.1:62681/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:23:22.008494   14160 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62681/healthz ...
	I1212 21:23:22.019209   14160 api_server.go:279] https://127.0.0.1:62681/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:23:22.019209   14160 api_server.go:103] status: https://127.0.0.1:62681/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:23:22.507999   14160 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62681/healthz ...
	I1212 21:23:22.600220   14160 api_server.go:279] https://127.0.0.1:62681/healthz returned 200:
	ok
	I1212 21:23:22.604091   14160 api_server.go:141] control plane version: v1.34.2
	I1212 21:23:22.604864   14160 api_server.go:131] duration metric: took 1.5972868s to wait for apiserver health ...
	I1212 21:23:22.604864   14160 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:23:22.612203   14160 system_pods.go:59] 8 kube-system pods found
	I1212 21:23:22.612251   14160 system_pods.go:61] "coredns-66bc5c9577-r7gwt" [979e9392-cb57-473f-a182-4c5303f99ccb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:23:22.612251   14160 system_pods.go:61] "etcd-default-k8s-diff-port-124600" [e50ec1f7-1f83-4869-a84c-952b0e33f049] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:23:22.612251   14160 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-124600" [059ec5f7-88a0-4be1-97fb-6d5c79ea4d2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:23:22.612251   14160 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-124600" [dbf61887-d9e4-47f1-9d4e-ca99ead26629] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:23:22.612251   14160 system_pods.go:61] "kube-proxy-2pvfg" [2f8eb4c0-64c0-4637-aec2-844cef61dbe8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:23:22.612251   14160 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-124600" [f9ed9e8e-2047-4e30-9bf9-ab3bbf3a89e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:23:22.612251   14160 system_pods.go:61] "metrics-server-746fcd58dc-tqqxz" [592c1b3b-eb79-46ea-a4fd-f44074836d45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:23:22.612251   14160 system_pods.go:61] "storage-provisioner" [78048666-1799-4536-8b49-91d324f75325] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:23:22.612251   14160 system_pods.go:74] duration metric: took 7.3871ms to wait for pod list to return data ...
	I1212 21:23:22.612251   14160 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:23:22.616756   14160 default_sa.go:45] found service account: "default"
	I1212 21:23:22.616756   14160 default_sa.go:55] duration metric: took 4.5056ms for default service account to be created ...
	I1212 21:23:22.616756   14160 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:23:22.695042   14160 system_pods.go:86] 8 kube-system pods found
	I1212 21:23:22.695042   14160 system_pods.go:89] "coredns-66bc5c9577-r7gwt" [979e9392-cb57-473f-a182-4c5303f99ccb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:23:22.695105   14160 system_pods.go:89] "etcd-default-k8s-diff-port-124600" [e50ec1f7-1f83-4869-a84c-952b0e33f049] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:23:22.695105   14160 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-124600" [059ec5f7-88a0-4be1-97fb-6d5c79ea4d2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:23:22.695105   14160 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-124600" [dbf61887-d9e4-47f1-9d4e-ca99ead26629] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:23:22.695105   14160 system_pods.go:89] "kube-proxy-2pvfg" [2f8eb4c0-64c0-4637-aec2-844cef61dbe8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:23:22.695105   14160 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-124600" [f9ed9e8e-2047-4e30-9bf9-ab3bbf3a89e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:23:22.695105   14160 system_pods.go:89] "metrics-server-746fcd58dc-tqqxz" [592c1b3b-eb79-46ea-a4fd-f44074836d45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:23:22.695168   14160 system_pods.go:89] "storage-provisioner" [78048666-1799-4536-8b49-91d324f75325] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:23:22.695168   14160 system_pods.go:126] duration metric: took 78.4107ms to wait for k8s-apps to be running ...
	I1212 21:23:22.695198   14160 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:23:22.700468   14160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:23:22.725193   14160 system_svc.go:56] duration metric: took 29.0191ms WaitForService to wait for kubelet
	I1212 21:23:22.725193   14160 kubeadm.go:587] duration metric: took 10.868056s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:23:22.725193   14160 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:23:22.732161   14160 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1212 21:23:22.732201   14160 node_conditions.go:123] node cpu capacity is 16
	I1212 21:23:22.732201   14160 node_conditions.go:105] duration metric: took 7.0085ms to run NodePressure ...
	I1212 21:23:22.732201   14160 start.go:242] waiting for startup goroutines ...
	I1212 21:23:22.732201   14160 start.go:247] waiting for cluster config update ...
	I1212 21:23:22.732201   14160 start.go:256] writing updated cluster config ...
	I1212 21:23:22.737899   14160 ssh_runner.go:195] Run: rm -f paused
	I1212 21:23:22.745044   14160 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 21:23:22.751178   14160 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-r7gwt" in "kube-system" namespace to be "Ready" or be gone ...
	W1212 21:23:24.761658   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	W1212 21:23:26.763298   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	W1212 21:23:29.260393   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	W1212 21:23:31.262454   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	W1212 21:23:33.762195   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	W1212 21:23:35.762487   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	W1212 21:23:39.113069   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	W1212 21:23:41.263268   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	W1212 21:23:43.269341   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	I1212 21:23:44.762609   14160 pod_ready.go:94] pod "coredns-66bc5c9577-r7gwt" is "Ready"
	I1212 21:23:44.762609   14160 pod_ready.go:86] duration metric: took 22.0110788s for pod "coredns-66bc5c9577-r7gwt" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:44.767351   14160 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-124600" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:44.774353   14160 pod_ready.go:94] pod "etcd-default-k8s-diff-port-124600" is "Ready"
	I1212 21:23:44.774353   14160 pod_ready.go:86] duration metric: took 7.0013ms for pod "etcd-default-k8s-diff-port-124600" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:44.779541   14160 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-124600" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:44.786861   14160 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-124600" is "Ready"
	I1212 21:23:44.786861   14160 pod_ready.go:86] duration metric: took 7.3192ms for pod "kube-apiserver-default-k8s-diff-port-124600" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:44.790455   14160 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-124600" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:44.958511   14160 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-124600" is "Ready"
	I1212 21:23:44.958599   14160 pod_ready.go:86] duration metric: took 168.1411ms for pod "kube-controller-manager-default-k8s-diff-port-124600" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:45.158399   14160 pod_ready.go:83] waiting for pod "kube-proxy-2pvfg" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:45.557624   14160 pod_ready.go:94] pod "kube-proxy-2pvfg" is "Ready"
	I1212 21:23:45.557624   14160 pod_ready.go:86] duration metric: took 399.2187ms for pod "kube-proxy-2pvfg" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:45.758026   14160 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-124600" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:46.157650   14160 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-124600" is "Ready"
	I1212 21:23:46.158249   14160 pod_ready.go:86] duration metric: took 400.1515ms for pod "kube-scheduler-default-k8s-diff-port-124600" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:46.158249   14160 pod_ready.go:40] duration metric: took 23.4127353s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 21:23:46.259466   14160 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 21:23:46.263937   14160 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-124600" cluster and "default" namespace by default
	I1212 21:23:57.490599   11500 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 21:23:57.490599   11500 kubeadm.go:319] 
	I1212 21:23:57.490599   11500 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 21:23:57.495885   11500 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 21:23:57.496001   11500 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 21:23:57.497139   11500 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 21:23:57.497139   11500 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 21:23:57.497139   11500 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 21:23:57.497139   11500 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 21:23:57.497669   11500 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 21:23:57.497746   11500 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 21:23:57.497746   11500 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 21:23:57.497746   11500 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 21:23:57.497746   11500 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 21:23:57.497746   11500 kubeadm.go:319] CONFIG_INET: enabled
	I1212 21:23:57.498271   11500 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 21:23:57.498345   11500 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 21:23:57.498345   11500 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 21:23:57.498345   11500 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 21:23:57.498345   11500 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 21:23:57.498900   11500 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 21:23:57.498900   11500 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 21:23:57.498900   11500 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 21:23:57.498900   11500 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 21:23:57.498900   11500 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 21:23:57.498900   11500 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 21:23:57.499450   11500 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 21:23:57.499613   11500 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 21:23:57.499682   11500 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 21:23:57.499716   11500 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 21:23:57.499716   11500 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 21:23:57.499716   11500 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 21:23:57.499716   11500 kubeadm.go:319] OS: Linux
	I1212 21:23:57.499716   11500 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 21:23:57.500238   11500 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 21:23:57.500273   11500 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 21:23:57.500273   11500 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 21:23:57.500273   11500 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 21:23:57.500273   11500 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 21:23:57.500273   11500 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 21:23:57.500273   11500 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 21:23:57.500863   11500 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 21:23:57.501070   11500 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:23:57.501182   11500 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:23:57.501182   11500 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 21:23:57.501182   11500 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:23:57.504498   11500 out.go:252]   - Generating certificates and keys ...
	I1212 21:23:57.505131   11500 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 21:23:57.505131   11500 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 21:23:57.505131   11500 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 21:23:57.505131   11500 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 21:23:57.505777   11500 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 21:23:57.505777   11500 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 21:23:57.505777   11500 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 21:23:57.506311   11500 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-285600] and IPs [192.168.121.2 127.0.0.1 ::1]
	I1212 21:23:57.506429   11500 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 21:23:57.506429   11500 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-285600] and IPs [192.168.121.2 127.0.0.1 ::1]
	I1212 21:23:57.506429   11500 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 21:23:57.506429   11500 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 21:23:57.506990   11500 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 21:23:57.506990   11500 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:23:57.506990   11500 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:23:57.506990   11500 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 21:23:57.506990   11500 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:23:57.506990   11500 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:23:57.506990   11500 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:23:57.506990   11500 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:23:57.506990   11500 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:23:57.510650   11500 out.go:252]   - Booting up control plane ...
	I1212 21:23:57.510650   11500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:23:57.510650   11500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:23:57.510650   11500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:23:57.511664   11500 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:23:57.511664   11500 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 21:23:57.511664   11500 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 21:23:57.511664   11500 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:23:57.511664   11500 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 21:23:57.511664   11500 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 21:23:57.512655   11500 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 21:23:57.512655   11500 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000951132s
	I1212 21:23:57.512655   11500 kubeadm.go:319] 
	I1212 21:23:57.512655   11500 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 21:23:57.512655   11500 kubeadm.go:319] 	- The kubelet is not running
	I1212 21:23:57.512655   11500 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 21:23:57.512655   11500 kubeadm.go:319] 
	I1212 21:23:57.512655   11500 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 21:23:57.512655   11500 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 21:23:57.512655   11500 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 21:23:57.512655   11500 kubeadm.go:319] 
	W1212 21:23:57.513649   11500 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-285600] and IPs [192.168.121.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-285600] and IPs [192.168.121.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000951132s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 21:23:57.516687   11500 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1212 21:23:57.973632   11500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:23:58.000358   11500 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 21:23:58.005518   11500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:23:58.022197   11500 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:23:58.022197   11500 kubeadm.go:158] found existing configuration files:
	
	I1212 21:23:58.026872   11500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 21:23:58.039115   11500 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 21:23:58.043123   11500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 21:23:58.060114   11500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 21:23:58.073122   11500 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 21:23:58.076119   11500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 21:23:58.092125   11500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 21:23:58.107123   11500 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 21:23:58.112123   11500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 21:23:58.132133   11500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 21:23:58.145128   11500 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 21:23:58.149118   11500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 21:23:58.165115   11500 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 21:23:58.280707   11500 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1212 21:23:58.378404   11500 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 21:23:58.484549   11500 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 21:26:50.572138    3280 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 21:26:50.572138    3280 kubeadm.go:319] 
	I1212 21:26:50.572138    3280 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 21:26:50.576372    3280 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 21:26:50.576562    3280 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 21:26:50.576743    3280 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 21:26:50.576743    3280 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 21:26:50.576743    3280 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 21:26:50.576743    3280 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 21:26:50.576743    3280 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 21:26:50.577278    3280 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 21:26:50.577527    3280 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 21:26:50.577527    3280 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 21:26:50.577527    3280 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 21:26:50.577527    3280 kubeadm.go:319] CONFIG_INET: enabled
	I1212 21:26:50.577527    3280 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 21:26:50.577527    3280 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 21:26:50.578180    3280 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 21:26:50.578223    3280 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 21:26:50.578223    3280 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 21:26:50.578223    3280 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 21:26:50.578223    3280 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 21:26:50.578753    3280 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 21:26:50.578857    3280 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 21:26:50.579009    3280 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 21:26:50.579109    3280 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 21:26:50.579235    3280 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 21:26:50.579500    3280 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 21:26:50.579604    3280 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 21:26:50.579832    3280 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 21:26:50.579931    3280 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 21:26:50.580029    3280 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 21:26:50.580029    3280 kubeadm.go:319] OS: Linux
	I1212 21:26:50.580029    3280 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 21:26:50.580029    3280 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 21:26:50.580029    3280 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 21:26:50.580029    3280 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 21:26:50.580562    3280 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 21:26:50.580709    3280 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 21:26:50.580788    3280 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 21:26:50.580931    3280 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 21:26:50.580972    3280 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 21:26:50.580972    3280 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:26:50.580972    3280 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:26:50.581495    3280 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 21:26:50.581626    3280 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:26:50.585055    3280 out.go:252]   - Generating certificates and keys ...
	I1212 21:26:50.585055    3280 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 21:26:50.585055    3280 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 21:26:50.585694    3280 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 21:26:50.585694    3280 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 21:26:50.585694    3280 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 21:26:50.585694    3280 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 21:26:50.585694    3280 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 21:26:50.586227    3280 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-449900] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1212 21:26:50.586357    3280 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 21:26:50.586417    3280 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-449900] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1212 21:26:50.586417    3280 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 21:26:50.586417    3280 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 21:26:50.586417    3280 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 21:26:50.587005    3280 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:26:50.587068    3280 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:26:50.587068    3280 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 21:26:50.587068    3280 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:26:50.587068    3280 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:26:50.587068    3280 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:26:50.587734    3280 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:26:50.587927    3280 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:26:50.590646    3280 out.go:252]   - Booting up control plane ...
	I1212 21:26:50.591259    3280 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:26:50.591837    3280 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:26:50.591837    3280 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:26:50.591837    3280 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:26:50.591837    3280 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 21:26:50.592415    3280 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 21:26:50.592415    3280 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:26:50.592415    3280 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 21:26:50.592415    3280 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 21:26:50.592415    3280 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 21:26:50.592415    3280 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001153116s
	I1212 21:26:50.592415    3280 kubeadm.go:319] 
	I1212 21:26:50.593382    3280 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 21:26:50.593382    3280 kubeadm.go:319] 	- The kubelet is not running
	I1212 21:26:50.593382    3280 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 21:26:50.593382    3280 kubeadm.go:319] 
	I1212 21:26:50.593382    3280 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 21:26:50.593382    3280 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 21:26:50.593382    3280 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 21:26:50.593382    3280 kubeadm.go:319] 
	W1212 21:26:50.593382    3280 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-449900] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-449900] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001153116s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 21:26:50.597384    3280 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1212 21:26:51.058393    3280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:26:51.077528    3280 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 21:26:51.081780    3280 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:26:51.095285    3280 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:26:51.095342    3280 kubeadm.go:158] found existing configuration files:
	
	I1212 21:26:51.100877    3280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 21:26:51.114399    3280 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 21:26:51.119274    3280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 21:26:51.137891    3280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 21:26:51.152853    3280 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 21:26:51.157180    3280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 21:26:51.176783    3280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 21:26:51.190524    3280 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 21:26:51.194597    3280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 21:26:51.212488    3280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 21:26:51.228065    3280 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 21:26:51.232039    3280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 21:26:51.250057    3280 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 21:26:51.372297    3280 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1212 21:26:51.461499    3280 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 21:26:51.553708    3280 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 21:27:59.635671   11500 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 21:27:59.635671   11500 kubeadm.go:319] 
	I1212 21:27:59.636285   11500 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 21:27:59.640685   11500 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 21:27:59.640685   11500 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 21:27:59.641210   11500 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 21:27:59.641454   11500 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 21:27:59.641454   11500 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 21:27:59.641454   11500 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 21:27:59.641454   11500 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 21:27:59.641454   11500 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 21:27:59.641454   11500 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 21:27:59.642159   11500 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 21:27:59.642187   11500 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 21:27:59.642187   11500 kubeadm.go:319] CONFIG_INET: enabled
	I1212 21:27:59.642187   11500 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 21:27:59.642187   11500 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 21:27:59.642718   11500 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 21:27:59.642918   11500 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 21:27:59.643104   11500 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 21:27:59.643295   11500 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 21:27:59.643295   11500 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 21:27:59.643295   11500 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 21:27:59.643295   11500 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 21:27:59.643295   11500 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 21:27:59.643935   11500 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 21:27:59.643987   11500 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 21:27:59.643987   11500 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 21:27:59.643987   11500 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 21:27:59.643987   11500 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 21:27:59.643987   11500 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 21:27:59.644635   11500 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 21:27:59.644733   11500 kubeadm.go:319] OS: Linux
	I1212 21:27:59.644880   11500 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 21:27:59.645003   11500 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 21:27:59.645114   11500 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 21:27:59.645225   11500 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 21:27:59.645248   11500 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 21:27:59.645248   11500 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 21:27:59.645248   11500 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 21:27:59.645248   11500 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 21:27:59.645248   11500 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 21:27:59.645248   11500 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:27:59.645998   11500 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:27:59.646240   11500 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 21:27:59.646401   11500 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:27:59.649353   11500 out.go:252]   - Generating certificates and keys ...
	I1212 21:27:59.649353   11500 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 21:27:59.649353   11500 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 21:27:59.649353   11500 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 21:27:59.649996   11500 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 21:27:59.649996   11500 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 21:27:59.649996   11500 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 21:27:59.649996   11500 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 21:27:59.649996   11500 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 21:27:59.650580   11500 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 21:27:59.650580   11500 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 21:27:59.650580   11500 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 21:27:59.650580   11500 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:27:59.650580   11500 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:27:59.650580   11500 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 21:27:59.651191   11500 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:27:59.651254   11500 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:27:59.651254   11500 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:27:59.651254   11500 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:27:59.651254   11500 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:27:59.653668   11500 out.go:252]   - Booting up control plane ...
	I1212 21:27:59.653940   11500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 21:27:59.655077   11500 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 21:27:59.655321   11500 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 21:27:59.655492   11500 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00060482s
	I1212 21:27:59.655492   11500 kubeadm.go:319] 
	I1212 21:27:59.655630   11500 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 21:27:59.655630   11500 kubeadm.go:319] 	- The kubelet is not running
	I1212 21:27:59.655821   11500 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 21:27:59.655821   11500 kubeadm.go:319] 
	I1212 21:27:59.656041   11500 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 21:27:59.656041   11500 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 21:27:59.656041   11500 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 21:27:59.656041   11500 kubeadm.go:319] 
	I1212 21:27:59.656041   11500 kubeadm.go:403] duration metric: took 8m4.8179078s to StartCluster
	I1212 21:27:59.656041   11500 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:27:59.659651   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:27:59.720934   11500 cri.go:89] found id: ""
	I1212 21:27:59.720934   11500 logs.go:282] 0 containers: []
	W1212 21:27:59.720934   11500 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:27:59.720934   11500 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:27:59.725183   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:27:59.766585   11500 cri.go:89] found id: ""
	I1212 21:27:59.766585   11500 logs.go:282] 0 containers: []
	W1212 21:27:59.766585   11500 logs.go:284] No container was found matching "etcd"
	I1212 21:27:59.766585   11500 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:27:59.771623   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:27:59.811981   11500 cri.go:89] found id: ""
	I1212 21:27:59.811981   11500 logs.go:282] 0 containers: []
	W1212 21:27:59.811981   11500 logs.go:284] No container was found matching "coredns"
	I1212 21:27:59.811981   11500 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:27:59.817402   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:27:59.863867   11500 cri.go:89] found id: ""
	I1212 21:27:59.863867   11500 logs.go:282] 0 containers: []
	W1212 21:27:59.863867   11500 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:27:59.863867   11500 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:27:59.874092   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:27:59.916790   11500 cri.go:89] found id: ""
	I1212 21:27:59.916790   11500 logs.go:282] 0 containers: []
	W1212 21:27:59.916790   11500 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:27:59.916790   11500 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:27:59.921036   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:27:59.972193   11500 cri.go:89] found id: ""
	I1212 21:27:59.972193   11500 logs.go:282] 0 containers: []
	W1212 21:27:59.972193   11500 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:27:59.972193   11500 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:27:59.976673   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:28:00.020419   11500 cri.go:89] found id: ""
	I1212 21:28:00.020419   11500 logs.go:282] 0 containers: []
	W1212 21:28:00.020419   11500 logs.go:284] No container was found matching "kindnet"
	I1212 21:28:00.020419   11500 logs.go:123] Gathering logs for container status ...
	I1212 21:28:00.020419   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:28:00.075393   11500 logs.go:123] Gathering logs for kubelet ...
	I1212 21:28:00.075393   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:28:00.136556   11500 logs.go:123] Gathering logs for dmesg ...
	I1212 21:28:00.136556   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:28:00.180601   11500 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:28:00.180601   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:28:00.264769   11500 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:28:00.257747   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.258889   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.260305   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.261725   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.262897   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:28:00.257747   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.258889   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.260305   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.261725   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.262897   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:28:00.264769   11500 logs.go:123] Gathering logs for Docker ...
	I1212 21:28:00.264769   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1212 21:28:00.295184   11500 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00060482s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 21:28:00.295286   11500 out.go:285] * 
	W1212 21:28:00.295361   11500 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00060482s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 21:28:00.295361   11500 out.go:285] * 
	W1212 21:28:00.297172   11500 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 21:28:00.306876   11500 out.go:203] 
	W1212 21:28:00.310659   11500 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00060482s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 21:28:00.310880   11500 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 21:28:00.310880   11500 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 21:28:00.312599   11500 out.go:203] 
	
	
	==> Docker <==
	Dec 12 21:19:27 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:27.896422880Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 12 21:19:27 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:27.896514789Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 12 21:19:27 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:27.896525790Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 12 21:19:27 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:27.896530891Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 21:19:27 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:27.896538492Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 12 21:19:27 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:27.896562994Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 12 21:19:27 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:27.896607799Z" level=info msg="Initializing buildkit"
	Dec 12 21:19:28 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:28.063364015Z" level=info msg="Completed buildkit initialization"
	Dec 12 21:19:28 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:28.070100507Z" level=info msg="Daemon has completed initialization"
	Dec 12 21:19:28 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:28.070204618Z" level=info msg="API listen on /run/docker.sock"
	Dec 12 21:19:28 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:28.070271524Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 21:19:28 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:28.070381736Z" level=info msg="API listen on [::]:2376"
	Dec 12 21:19:28 no-preload-285600 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 12 21:19:28 no-preload-285600 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Start docker client with request timeout 0s"
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Loaded network plugin cni"
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 12 21:19:28 no-preload-285600 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:28:08.142081   11390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:08.142540   11390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:08.157303   11390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:08.158954   11390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:08.160578   11390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[Dec12 21:23] CPU: 13 PID: 434005 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f45063b9b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f45063b9af6.
	[  +0.000001] RSP: 002b:00007fffb2f7a7b0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.884221] CPU: 10 PID: 434152 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f1ab5b6bb20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7f1ab5b6baf6.
	[  +0.000001] RSP: 002b:00007fffe51bbd80 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +3.005046] tmpfs: Unknown parameter 'noswap'
	[Dec12 21:24] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 21:28:08 up  2:29,  0 user,  load average: 0.75, 2.62, 3.69
	Linux no-preload-285600 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 21:28:05 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:28:05 no-preload-285600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 328.
	Dec 12 21:28:05 no-preload-285600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:28:05 no-preload-285600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:28:05 no-preload-285600 kubelet[11213]: E1212 21:28:05.888041   11213 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:28:05 no-preload-285600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:28:05 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:28:06 no-preload-285600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 329.
	Dec 12 21:28:06 no-preload-285600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:28:06 no-preload-285600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:28:06 no-preload-285600 kubelet[11243]: E1212 21:28:06.646313   11243 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:28:06 no-preload-285600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:28:06 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:28:07 no-preload-285600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 330.
	Dec 12 21:28:07 no-preload-285600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:28:07 no-preload-285600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:28:07 no-preload-285600 kubelet[11272]: E1212 21:28:07.399513   11272 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:28:07 no-preload-285600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:28:07 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:28:08 no-preload-285600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 331.
	Dec 12 21:28:08 no-preload-285600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:28:08 no-preload-285600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:28:08 no-preload-285600 kubelet[11379]: E1212 21:28:08.129131   11379 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:28:08 no-preload-285600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:28:08 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-285600 -n no-preload-285600
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-285600 -n no-preload-285600: exit status 6 (596.8484ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1212 21:28:09.220678     784 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-285600" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-285600" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (5.68s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (119.61s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-285600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1212 21:28:12.307521   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:28:15.474997   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-124600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:28:15.578032   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:28:17.510874   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-246400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:28:28.884301   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:28:56.437184   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-124600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:29:54.842637   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-285600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m56.6454735s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_addons_e23971240287a88151a2b5edd52daaba3879ba4a_7.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-285600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-285600 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-285600 describe deploy/metrics-server -n kube-system: exit status 1 (93.3472ms)

** stderr ** 
	error: context "no-preload-285600" does not exist

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-285600 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-285600
helpers_test.go:244: (dbg) docker inspect no-preload-285600:

-- stdout --
	[
	    {
	        "Id": "87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941",
	        "Created": "2025-12-12T21:19:18.160660304Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 389675,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T21:19:18.519800705Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941/hostname",
	        "HostsPath": "/var/lib/docker/containers/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941/hosts",
	        "LogPath": "/var/lib/docker/containers/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941-json.log",
	        "Name": "/no-preload-285600",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-285600:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-285600",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7f57614ba244d0d3b05389aca0ac3118a52910d8ae213a8eb43d4329923d883e-init/diff:/var/lib/docker/overlay2/98a48e148b465d6f3cfef773b7defe35ccd4e6e0eb3c49d8b329f27b5b52fa09/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7f57614ba244d0d3b05389aca0ac3118a52910d8ae213a8eb43d4329923d883e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7f57614ba244d0d3b05389aca0ac3118a52910d8ae213a8eb43d4329923d883e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7f57614ba244d0d3b05389aca0ac3118a52910d8ae213a8eb43d4329923d883e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-285600",
	                "Source": "/var/lib/docker/volumes/no-preload-285600/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-285600",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-285600",
	                "name.minikube.sigs.k8s.io": "no-preload-285600",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5af3f413668a0d538b65d8f61bdb8f76c9d3fffc039f5c39eab88c8e538214f8",
	            "SandboxKey": "/var/run/docker/netns/5af3f413668a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62132"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62133"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62134"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-285600": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.121.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:79:02",
	                    "DriverOpts": null,
	                    "NetworkID": "eade8f7b7c484afc6ac9fb22c89a1319287a418e545549931d13e1b2247abede",
	                    "EndpointID": "41d46d4540a8534435610e3455fd03f86fe030069ea47ea0bc7248badc5ae81c",
	                    "Gateway": "192.168.121.1",
	                    "IPAddress": "192.168.121.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-285600",
	                        "87204d707694"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-285600 -n no-preload-285600
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-285600 -n no-preload-285600: exit status 6 (598.5713ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1212 21:30:06.647710    2844 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-285600" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-285600 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-285600 logs -n 25: (1.1362067s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                            ARGS                                                                                                            │           PROFILE            │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-729900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                   │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:21 UTC │ 12 Dec 25 21:21 UTC │
	│ stop    │ -p embed-certs-729900 --alsologtostderr -v=3                                                                                                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:21 UTC │ 12 Dec 25 21:21 UTC │
	│ addons  │ enable dashboard -p embed-certs-729900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                              │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:21 UTC │ 12 Dec 25 21:21 UTC │
	│ start   │ -p embed-certs-729900 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.2                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:21 UTC │ 12 Dec 25 21:22 UTC │
	│ image   │ old-k8s-version-246400 image list --format=json                                                                                                                                                                            │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ pause   │ -p old-k8s-version-246400 --alsologtostderr -v=1                                                                                                                                                                           │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ unpause │ -p old-k8s-version-246400 --alsologtostderr -v=1                                                                                                                                                                           │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ delete  │ -p old-k8s-version-246400                                                                                                                                                                                                  │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ delete  │ -p old-k8s-version-246400                                                                                                                                                                                                  │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ start   │ -p newest-cni-449900 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-124600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                         │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ stop    │ -p default-k8s-diff-port-124600 --alsologtostderr -v=3                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ image   │ embed-certs-729900 image list --format=json                                                                                                                                                                                │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ pause   │ -p embed-certs-729900 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ unpause │ -p embed-certs-729900 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-124600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                    │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ start   │ -p default-k8s-diff-port-124600 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:23 UTC │
	│ delete  │ -p embed-certs-729900                                                                                                                                                                                                      │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:23 UTC │
	│ delete  │ -p embed-certs-729900                                                                                                                                                                                                      │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ image   │ default-k8s-diff-port-124600 image list --format=json                                                                                                                                                                      │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ pause   │ -p default-k8s-diff-port-124600 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ unpause │ -p default-k8s-diff-port-124600 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ delete  │ -p default-k8s-diff-port-124600                                                                                                                                                                                            │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ delete  │ -p default-k8s-diff-port-124600                                                                                                                                                                                            │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ addons  │ enable metrics-server -p no-preload-285600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                    │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:28 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 21:22:58
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 21:22:58.216335   14160 out.go:360] Setting OutFile to fd 1132 ...
	I1212 21:22:58.266331   14160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:22:58.266331   14160 out.go:374] Setting ErrFile to fd 1508...
	I1212 21:22:58.266331   14160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:22:58.280322   14160 out.go:368] Setting JSON to false
	I1212 21:22:58.283341   14160 start.go:133] hostinfo: {"hostname":"minikube4","uptime":8716,"bootTime":1765565862,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 21:22:58.283341   14160 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 21:22:58.287338   14160 out.go:179] * [default-k8s-diff-port-124600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 21:22:58.290341   14160 notify.go:221] Checking for updates...
	I1212 21:22:58.292332   14160 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:22:58.294328   14160 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:22:58.296340   14160 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 21:22:58.298340   14160 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 21:22:58.301322   14160 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:22:58.304323   14160 config.go:182] Loaded profile config "default-k8s-diff-port-124600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1212 21:22:58.305325   14160 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 21:22:58.434944   14160 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 21:22:58.438949   14160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:22:58.676253   14160 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-12 21:22:58.655092827 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:22:58.680239   14160 out.go:179] * Using the docker driver based on existing profile
	I1212 21:22:58.682239   14160 start.go:309] selected driver: docker
	I1212 21:22:58.682239   14160 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-124600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-124600 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:22:58.682239   14160 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:22:58.732240   14160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:22:58.965241   14160 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:95 OomKillDisable:true NGoroutines:100 SystemTime:2025-12-12 21:22:58.948719453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 In
dexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDesc
ription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Prog
ram Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:22:58.966243   14160 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:22:58.966243   14160 cni.go:84] Creating CNI manager for ""
	I1212 21:22:58.966243   14160 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:22:58.966243   14160 start.go:353] cluster config:
	{Name:default-k8s-diff-port-124600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-124600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:22:58.968243   14160 out.go:179] * Starting "default-k8s-diff-port-124600" primary control-plane node in "default-k8s-diff-port-124600" cluster
	I1212 21:22:58.972244   14160 cache.go:134] Beginning downloading kic base image for docker with docker
	I1212 21:22:58.974236   14160 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:22:58.977243   14160 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1212 21:22:58.977243   14160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:22:58.977243   14160 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1212 21:22:58.977243   14160 cache.go:65] Caching tarball of preloaded images
	I1212 21:22:58.977243   14160 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 21:22:58.978245   14160 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1212 21:22:58.978245   14160 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\config.json ...
	I1212 21:22:59.059257   14160 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:22:59.059257   14160 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:22:59.059257   14160 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:22:59.059257   14160 start.go:360] acquireMachinesLock for default-k8s-diff-port-124600: {Name:mk780a32308b64368d3930722f9e881df08c3504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:22:59.059257   14160 start.go:364] duration metric: took 0s to acquireMachinesLock for "default-k8s-diff-port-124600"
	I1212 21:22:59.059257   14160 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:22:59.059257   14160 fix.go:54] fixHost starting: 
	I1212 21:22:59.066252   14160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124600 --format={{.State.Status}}
	I1212 21:22:59.129461   14160 fix.go:112] recreateIfNeeded on default-k8s-diff-port-124600: state=Stopped err=<nil>
	W1212 21:22:59.129461   14160 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:22:59.133088   14160 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-124600" ...
	I1212 21:22:59.136686   14160 cli_runner.go:164] Run: docker start default-k8s-diff-port-124600
	I1212 21:22:59.862889   14160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124600 --format={{.State.Status}}
	I1212 21:22:59.919149   14160 kic.go:430] container "default-k8s-diff-port-124600" state is running.
	I1212 21:22:59.924156   14160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-124600
	I1212 21:22:59.977149   14160 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\config.json ...
	I1212 21:22:59.979157   14160 machine.go:94] provisionDockerMachine start ...
	I1212 21:22:59.982162   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:00.038158   14160 main.go:143] libmachine: Using SSH client type: native
	I1212 21:23:00.038158   14160 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62677 <nil> <nil>}
	I1212 21:23:00.038158   14160 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:23:00.040164   14160 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 21:23:03.234044   14160 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-124600
	
	I1212 21:23:03.234044   14160 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-124600"
	I1212 21:23:03.237963   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:03.294306   14160 main.go:143] libmachine: Using SSH client type: native
	I1212 21:23:03.294306   14160 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62677 <nil> <nil>}
	I1212 21:23:03.294306   14160 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-124600 && echo "default-k8s-diff-port-124600" | sudo tee /etc/hostname
	I1212 21:23:03.491471   14160 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-124600
	
	I1212 21:23:03.495244   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:03.552274   14160 main.go:143] libmachine: Using SSH client type: native
	I1212 21:23:03.552715   14160 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62677 <nil> <nil>}
	I1212 21:23:03.552715   14160 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-124600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-124600/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-124600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:23:03.726759   14160 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:23:03.726759   14160 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1212 21:23:03.726759   14160 ubuntu.go:190] setting up certificates
	I1212 21:23:03.726759   14160 provision.go:84] configureAuth start
	I1212 21:23:03.730596   14160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-124600
	I1212 21:23:03.786827   14160 provision.go:143] copyHostCerts
	I1212 21:23:03.787473   14160 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1212 21:23:03.787473   14160 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1212 21:23:03.787473   14160 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 21:23:03.788324   14160 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1212 21:23:03.788324   14160 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1212 21:23:03.788845   14160 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 21:23:03.789576   14160 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1212 21:23:03.789576   14160 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1212 21:23:03.789576   14160 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1212 21:23:03.790404   14160 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.default-k8s-diff-port-124600 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-124600 localhost minikube]
	I1212 21:23:04.028472   14160 provision.go:177] copyRemoteCerts
	I1212 21:23:04.032783   14160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:23:04.035720   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:04.090685   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	I1212 21:23:04.220108   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 21:23:04.251841   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1249 bytes)
	I1212 21:23:04.283040   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 21:23:04.313548   14160 provision.go:87] duration metric: took 586.7803ms to configureAuth
	I1212 21:23:04.313548   14160 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:23:04.313548   14160 config.go:182] Loaded profile config "default-k8s-diff-port-124600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1212 21:23:04.319686   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:04.374458   14160 main.go:143] libmachine: Using SSH client type: native
	I1212 21:23:04.375110   14160 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62677 <nil> <nil>}
	I1212 21:23:04.375110   14160 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 21:23:04.546890   14160 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1212 21:23:04.546890   14160 ubuntu.go:71] root file system type: overlay
	I1212 21:23:04.546890   14160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 21:23:04.551279   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:04.607300   14160 main.go:143] libmachine: Using SSH client type: native
	I1212 21:23:04.607818   14160 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62677 <nil> <nil>}
	I1212 21:23:04.607929   14160 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 21:23:04.799190   14160 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 21:23:04.802868   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:04.862025   14160 main.go:143] libmachine: Using SSH client type: native
	I1212 21:23:04.862025   14160 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62677 <nil> <nil>}
	I1212 21:23:04.862025   14160 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 21:23:05.043356   14160 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:23:05.043406   14160 machine.go:97] duration metric: took 5.0641684s to provisionDockerMachine
	I1212 21:23:05.043449   14160 start.go:293] postStartSetup for "default-k8s-diff-port-124600" (driver="docker")
	I1212 21:23:05.043449   14160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:23:05.047805   14160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:23:05.051418   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:05.110898   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	I1212 21:23:05.255814   14160 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:23:05.264052   14160 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:23:05.264052   14160 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:23:05.264052   14160 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1212 21:23:05.264766   14160 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1212 21:23:05.265608   14160 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> 133962.pem in /etc/ssl/certs
	I1212 21:23:05.270881   14160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:23:05.288263   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /etc/ssl/certs/133962.pem (1708 bytes)
	I1212 21:23:05.316332   14160 start.go:296] duration metric: took 272.8783ms for postStartSetup
	I1212 21:23:05.320908   14160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:23:05.324174   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:05.375311   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	I1212 21:23:05.511900   14160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:23:05.522084   14160 fix.go:56] duration metric: took 6.4622006s for fixHost
	I1212 21:23:05.522084   14160 start.go:83] releasing machines lock for "default-k8s-diff-port-124600", held for 6.4627242s
	I1212 21:23:05.525524   14160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-124600
	I1212 21:23:05.580943   14160 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1212 21:23:05.584825   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:05.585512   14160 ssh_runner.go:195] Run: cat /version.json
	I1212 21:23:05.589557   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:05.645453   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	I1212 21:23:05.647465   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	W1212 21:23:05.764866   14160 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1212 21:23:05.777208   14160 ssh_runner.go:195] Run: systemctl --version
	I1212 21:23:05.795091   14160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:23:05.805053   14160 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:23:05.808995   14160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:23:05.822377   14160 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:23:05.822377   14160 start.go:496] detecting cgroup driver to use...
	I1212 21:23:05.822377   14160 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:23:05.822377   14160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:23:05.850571   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1212 21:23:05.860918   14160 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1212 21:23:05.860962   14160 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1212 21:23:05.870950   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 21:23:05.886032   14160 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 21:23:05.890300   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 21:23:05.911690   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:23:05.931881   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 21:23:05.951355   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:23:05.972217   14160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:23:05.989654   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 21:23:06.008555   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 21:23:06.029580   14160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 21:23:06.051557   14160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:23:06.068272   14160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:23:06.088555   14160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:23:06.232851   14160 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 21:23:06.395580   14160 start.go:496] detecting cgroup driver to use...
	I1212 21:23:06.396135   14160 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:23:06.401664   14160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 21:23:06.427774   14160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:23:06.449987   14160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:23:06.530054   14160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:23:06.552557   14160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 21:23:06.573212   14160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:23:06.601206   14160 ssh_runner.go:195] Run: which cri-dockerd
	I1212 21:23:06.613316   14160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 21:23:06.629256   14160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1212 21:23:06.655736   14160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 21:23:06.808191   14160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 21:23:06.948697   14160 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 21:23:06.949225   14160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 21:23:06.973857   14160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1212 21:23:06.995178   14160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:23:07.159801   14160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 21:23:08.387280   14160 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.2274602s)
	I1212 21:23:08.392059   14160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:23:08.414696   14160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1212 21:23:08.439024   14160 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1212 21:23:08.465914   14160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:23:08.488326   14160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 21:23:08.636890   14160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 21:23:08.775314   14160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:23:08.926196   14160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 21:23:08.950709   14160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1212 21:23:08.974437   14160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:23:09.109676   14160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1212 21:23:09.227758   14160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:23:09.246593   14160 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 21:23:09.251694   14160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 21:23:09.259250   14160 start.go:564] Will wait 60s for crictl version
	I1212 21:23:09.263473   14160 ssh_runner.go:195] Run: which crictl
	I1212 21:23:09.274454   14160 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:23:09.319908   14160 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1212 21:23:09.323619   14160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:23:09.371068   14160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:23:09.415300   14160 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.1.2 ...
	I1212 21:23:09.420229   14160 cli_runner.go:164] Run: docker exec -t default-k8s-diff-port-124600 dig +short host.docker.internal
	I1212 21:23:09.561538   14160 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1212 21:23:09.566410   14160 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1212 21:23:09.573305   14160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:23:09.594186   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:09.649016   14160 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-124600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-124600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 21:23:09.649995   14160 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1212 21:23:09.652859   14160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:23:09.686348   14160 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1212 21:23:09.686348   14160 docker.go:621] Images already preloaded, skipping extraction
	I1212 21:23:09.689834   14160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:23:09.722637   14160 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1212 21:23:09.722717   14160 cache_images.go:86] Images are preloaded, skipping loading
	I1212 21:23:09.722717   14160 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.2 docker true true} ...
	I1212 21:23:09.722968   14160 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-124600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-124600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:23:09.726467   14160 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 21:23:09.804166   14160 cni.go:84] Creating CNI manager for ""
	I1212 21:23:09.804166   14160 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:23:09.804166   14160 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 21:23:09.804166   14160 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-124600 NodeName:default-k8s-diff-port-124600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:23:09.804776   14160 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-124600"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:23:09.809184   14160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 21:23:09.822880   14160 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:23:09.827517   14160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:23:09.843159   14160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1212 21:23:09.865173   14160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:23:09.883664   14160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1212 21:23:09.910110   14160 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 21:23:09.917548   14160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:23:09.936437   14160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:23:10.076798   14160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:23:10.099969   14160 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600 for IP: 192.168.76.2
	I1212 21:23:10.099969   14160 certs.go:195] generating shared ca certs ...
	I1212 21:23:10.099969   14160 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:23:10.100633   14160 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1212 21:23:10.100633   14160 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1212 21:23:10.100633   14160 certs.go:257] generating profile certs ...
	I1212 21:23:10.101754   14160 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\client.key
	I1212 21:23:10.102187   14160 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\apiserver.key.c1ba716d
	I1212 21:23:10.102537   14160 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\proxy-client.key
	I1212 21:23:10.103938   14160 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem (1338 bytes)
	W1212 21:23:10.103938   14160 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396_empty.pem, impossibly tiny 0 bytes
	I1212 21:23:10.103938   14160 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1212 21:23:10.104497   14160 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 21:23:10.104785   14160 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 21:23:10.105145   14160 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1212 21:23:10.105904   14160 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem (1708 bytes)
	I1212 21:23:10.107597   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:23:10.138041   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 21:23:10.169285   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:23:10.199761   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:23:10.228706   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 21:23:10.259268   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 21:23:10.319083   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:23:10.408082   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-124600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:23:10.504827   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:23:10.535027   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem --> /usr/share/ca-certificates/13396.pem (1338 bytes)
	I1212 21:23:10.606848   14160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /usr/share/ca-certificates/133962.pem (1708 bytes)
	I1212 21:23:10.641191   14160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:23:10.699040   14160 ssh_runner.go:195] Run: openssl version
	I1212 21:23:10.713021   14160 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/133962.pem
	I1212 21:23:10.729389   14160 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/133962.pem /etc/ssl/certs/133962.pem
	I1212 21:23:10.746196   14160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133962.pem
	I1212 21:23:10.754411   14160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 21:23:10.759187   14160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133962.pem
	I1212 21:23:10.807227   14160 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:23:10.824046   14160 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:23:10.841672   14160 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:23:10.866000   14160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:23:10.875373   14160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:23:10.880699   14160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:23:10.937889   14160 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:23:10.955118   14160 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13396.pem
	I1212 21:23:10.975153   14160 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13396.pem /etc/ssl/certs/13396.pem
	I1212 21:23:10.995392   14160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13396.pem
	I1212 21:23:11.003494   14160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 21:23:11.008922   14160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13396.pem
	I1212 21:23:11.057570   14160 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:23:11.076453   14160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:23:11.089632   14160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:23:11.142247   14160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:23:11.218728   14160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:23:11.416273   14160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:23:11.544319   14160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:23:11.636634   14160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 21:23:11.685985   14160 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-124600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-124600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:23:11.690036   14160 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 21:23:11.724509   14160 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:23:11.737581   14160 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 21:23:11.737638   14160 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 21:23:11.743506   14160 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:23:11.757047   14160 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:23:11.761811   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:11.815778   14160 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-124600" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:23:11.816493   14160 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-124600" cluster setting kubeconfig missing "default-k8s-diff-port-124600" context setting]
	I1212 21:23:11.816493   14160 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:23:11.838352   14160 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:23:11.855027   14160 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1212 21:23:11.855027   14160 kubeadm.go:602] duration metric: took 117.3468ms to restartPrimaryControlPlane
	I1212 21:23:11.855027   14160 kubeadm.go:403] duration metric: took 169.0394ms to StartCluster
	I1212 21:23:11.855027   14160 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:23:11.855027   14160 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:23:11.856184   14160 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:23:11.856963   14160 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 21:23:11.856963   14160 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 21:23:11.856963   14160 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-124600"
	I1212 21:23:11.856963   14160 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-124600"
	I1212 21:23:11.856963   14160 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-124600"
	I1212 21:23:11.856963   14160 config.go:182] Loaded profile config "default-k8s-diff-port-124600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1212 21:23:11.856963   14160 addons.go:70] Setting metrics-server=true in profile "default-k8s-diff-port-124600"
	I1212 21:23:11.856963   14160 addons.go:239] Setting addon metrics-server=true in "default-k8s-diff-port-124600"
	I1212 21:23:11.856963   14160 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-124600"
	W1212 21:23:11.857487   14160 addons.go:248] addon metrics-server should already be in state true
	I1212 21:23:11.857567   14160 host.go:66] Checking if "default-k8s-diff-port-124600" exists ...
	I1212 21:23:11.856963   14160 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-124600"
	I1212 21:23:11.856963   14160 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-124600"
	W1212 21:23:11.857598   14160 addons.go:248] addon storage-provisioner should already be in state true
	W1212 21:23:11.857598   14160 addons.go:248] addon dashboard should already be in state true
	I1212 21:23:11.857767   14160 host.go:66] Checking if "default-k8s-diff-port-124600" exists ...
	I1212 21:23:11.857819   14160 host.go:66] Checking if "default-k8s-diff-port-124600" exists ...
	I1212 21:23:11.863101   14160 out.go:179] * Verifying Kubernetes components...
	I1212 21:23:11.866976   14160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124600 --format={{.State.Status}}
	I1212 21:23:11.868177   14160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124600 --format={{.State.Status}}
	I1212 21:23:11.870310   14160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124600 --format={{.State.Status}}
	I1212 21:23:11.870461   14160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124600 --format={{.State.Status}}
	I1212 21:23:11.871764   14160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:23:11.932064   14160 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1212 21:23:11.934081   14160 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:23:11.942073   14160 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:23:11.942073   14160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:23:11.944073   14160 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1212 21:23:11.945075   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:11.947064   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 21:23:11.947064   14160 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 21:23:11.951064   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:11.953072   14160 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-124600"
	W1212 21:23:11.953072   14160 addons.go:248] addon default-storageclass should already be in state true
	I1212 21:23:11.953072   14160 host.go:66] Checking if "default-k8s-diff-port-124600" exists ...
	I1212 21:23:11.962072   14160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-124600 --format={{.State.Status}}
	I1212 21:23:11.977069   14160 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 21:23:11.983067   14160 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 21:23:11.983067   14160 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 21:23:11.988070   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:12.005074   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	I1212 21:23:12.009067   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	I1212 21:23:12.019066   14160 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:23:12.019066   14160 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:23:12.022067   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:12.046070   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	I1212 21:23:12.073066   14160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62677 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-124600\id_rsa Username:docker}
	I1212 21:23:12.092898   14160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:23:12.116354   14160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-124600
	I1212 21:23:12.165367   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 21:23:12.165367   14160 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 21:23:12.167359   14160 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-124600" to be "Ready" ...
	I1212 21:23:12.169365   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:23:12.186351   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 21:23:12.186351   14160 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 21:23:12.204423   14160 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 21:23:12.204423   14160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 21:23:12.207045   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 21:23:12.207045   14160 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 21:23:12.230521   14160 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 21:23:12.230521   14160 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 21:23:12.231517   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 21:23:12.231517   14160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 21:23:12.233521   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:23:12.305253   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 21:23:12.305253   14160 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1212 21:23:12.305253   14160 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:23:12.305253   14160 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W1212 21:23:12.386842   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.386922   14160 retry.go:31] will retry after 278.156141ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.390854   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 21:23:12.390854   14160 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 21:23:12.400109   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:23:12.415390   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 21:23:12.415480   14160 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 21:23:12.491717   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 21:23:12.491717   14160 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W1212 21:23:12.492530   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.492530   14160 retry.go:31] will retry after 256.197463ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.512893   14160 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 21:23:12.512893   14160 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 21:23:12.538803   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:23:12.551683   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.551683   14160 retry.go:31] will retry after 265.384209ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:23:12.644080   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.644080   14160 retry.go:31] will retry after 354.535598ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.669419   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:23:12.752922   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.752922   14160 retry.go:31] will retry after 290.803282ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.753921   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:23:12.823384   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1212 21:23:12.917382   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:12.917460   14160 retry.go:31] will retry after 300.691587ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:13.004960   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 21:23:13.048937   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:23:13.093941   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:13.094016   14160 retry.go:31] will retry after 506.158576ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:13.223508   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:23:13.387160   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:23:13.387160   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:13.387360   14160 retry.go:31] will retry after 272.283438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:13.387397   14160 retry.go:31] will retry after 368.00551ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:13.607806   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:23:13.665164   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:23:13.697618   14160 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:13.698562   14160 retry.go:31] will retry after 669.122462ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:23:13.760538   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:23:14.372987   14160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:23:17.195846   14160 node_ready.go:49] node "default-k8s-diff-port-124600" is "Ready"
	I1212 21:23:17.195955   14160 node_ready.go:38] duration metric: took 5.028515s for node "default-k8s-diff-port-124600" to be "Ready" ...
	I1212 21:23:17.195955   14160 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:23:17.200813   14160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:23:20.596132   14160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.9882139s)
	I1212 21:23:20.596672   14160 addons.go:495] Verifying addon metrics-server=true in "default-k8s-diff-port-124600"
	I1212 21:23:21.007551   14160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.3422701s)
	I1212 21:23:21.007551   14160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.2468976s)
	I1212 21:23:21.007551   14160 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.8066778s)
	I1212 21:23:21.007551   14160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (6.634458s)
	I1212 21:23:21.007551   14160 api_server.go:72] duration metric: took 9.150442s to wait for apiserver process to appear ...
	I1212 21:23:21.007551   14160 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:23:21.007551   14160 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62681/healthz ...
	I1212 21:23:21.010167   14160 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-124600 addons enable metrics-server
	
	I1212 21:23:21.099582   14160 api_server.go:279] https://127.0.0.1:62681/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:23:21.100442   14160 api_server.go:103] status: https://127.0.0.1:62681/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:23:21.196982   14160 out.go:179] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I1212 21:23:21.200574   14160 addons.go:530] duration metric: took 9.3434618s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I1212 21:23:21.508465   14160 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62681/healthz ...
	I1212 21:23:21.591838   14160 api_server.go:279] https://127.0.0.1:62681/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:23:21.591838   14160 api_server.go:103] status: https://127.0.0.1:62681/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:23:22.008494   14160 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62681/healthz ...
	I1212 21:23:22.019209   14160 api_server.go:279] https://127.0.0.1:62681/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:23:22.019209   14160 api_server.go:103] status: https://127.0.0.1:62681/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:23:22.507999   14160 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62681/healthz ...
	I1212 21:23:22.600220   14160 api_server.go:279] https://127.0.0.1:62681/healthz returned 200:
	ok
	I1212 21:23:22.604091   14160 api_server.go:141] control plane version: v1.34.2
	I1212 21:23:22.604864   14160 api_server.go:131] duration metric: took 1.5972868s to wait for apiserver health ...
	I1212 21:23:22.604864   14160 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:23:22.612203   14160 system_pods.go:59] 8 kube-system pods found
	I1212 21:23:22.612251   14160 system_pods.go:61] "coredns-66bc5c9577-r7gwt" [979e9392-cb57-473f-a182-4c5303f99ccb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:23:22.612251   14160 system_pods.go:61] "etcd-default-k8s-diff-port-124600" [e50ec1f7-1f83-4869-a84c-952b0e33f049] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:23:22.612251   14160 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-124600" [059ec5f7-88a0-4be1-97fb-6d5c79ea4d2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:23:22.612251   14160 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-124600" [dbf61887-d9e4-47f1-9d4e-ca99ead26629] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:23:22.612251   14160 system_pods.go:61] "kube-proxy-2pvfg" [2f8eb4c0-64c0-4637-aec2-844cef61dbe8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:23:22.612251   14160 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-124600" [f9ed9e8e-2047-4e30-9bf9-ab3bbf3a89e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:23:22.612251   14160 system_pods.go:61] "metrics-server-746fcd58dc-tqqxz" [592c1b3b-eb79-46ea-a4fd-f44074836d45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:23:22.612251   14160 system_pods.go:61] "storage-provisioner" [78048666-1799-4536-8b49-91d324f75325] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:23:22.612251   14160 system_pods.go:74] duration metric: took 7.3871ms to wait for pod list to return data ...
	I1212 21:23:22.612251   14160 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:23:22.616756   14160 default_sa.go:45] found service account: "default"
	I1212 21:23:22.616756   14160 default_sa.go:55] duration metric: took 4.5056ms for default service account to be created ...
	I1212 21:23:22.616756   14160 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:23:22.695042   14160 system_pods.go:86] 8 kube-system pods found
	I1212 21:23:22.695042   14160 system_pods.go:89] "coredns-66bc5c9577-r7gwt" [979e9392-cb57-473f-a182-4c5303f99ccb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:23:22.695105   14160 system_pods.go:89] "etcd-default-k8s-diff-port-124600" [e50ec1f7-1f83-4869-a84c-952b0e33f049] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:23:22.695105   14160 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-124600" [059ec5f7-88a0-4be1-97fb-6d5c79ea4d2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:23:22.695105   14160 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-124600" [dbf61887-d9e4-47f1-9d4e-ca99ead26629] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:23:22.695105   14160 system_pods.go:89] "kube-proxy-2pvfg" [2f8eb4c0-64c0-4637-aec2-844cef61dbe8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:23:22.695105   14160 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-124600" [f9ed9e8e-2047-4e30-9bf9-ab3bbf3a89e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:23:22.695105   14160 system_pods.go:89] "metrics-server-746fcd58dc-tqqxz" [592c1b3b-eb79-46ea-a4fd-f44074836d45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:23:22.695168   14160 system_pods.go:89] "storage-provisioner" [78048666-1799-4536-8b49-91d324f75325] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:23:22.695168   14160 system_pods.go:126] duration metric: took 78.4107ms to wait for k8s-apps to be running ...
	I1212 21:23:22.695198   14160 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:23:22.700468   14160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:23:22.725193   14160 system_svc.go:56] duration metric: took 29.0191ms WaitForService to wait for kubelet
	I1212 21:23:22.725193   14160 kubeadm.go:587] duration metric: took 10.868056s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:23:22.725193   14160 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:23:22.732161   14160 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1212 21:23:22.732201   14160 node_conditions.go:123] node cpu capacity is 16
	I1212 21:23:22.732201   14160 node_conditions.go:105] duration metric: took 7.0085ms to run NodePressure ...
	I1212 21:23:22.732201   14160 start.go:242] waiting for startup goroutines ...
	I1212 21:23:22.732201   14160 start.go:247] waiting for cluster config update ...
	I1212 21:23:22.732201   14160 start.go:256] writing updated cluster config ...
	I1212 21:23:22.737899   14160 ssh_runner.go:195] Run: rm -f paused
	I1212 21:23:22.745044   14160 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 21:23:22.751178   14160 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-r7gwt" in "kube-system" namespace to be "Ready" or be gone ...
	W1212 21:23:24.761658   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	W1212 21:23:26.763298   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	W1212 21:23:29.260393   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	W1212 21:23:31.262454   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	W1212 21:23:33.762195   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	W1212 21:23:35.762487   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	W1212 21:23:39.113069   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	W1212 21:23:41.263268   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	W1212 21:23:43.269341   14160 pod_ready.go:104] pod "coredns-66bc5c9577-r7gwt" is not "Ready", error: <nil>
	I1212 21:23:44.762609   14160 pod_ready.go:94] pod "coredns-66bc5c9577-r7gwt" is "Ready"
	I1212 21:23:44.762609   14160 pod_ready.go:86] duration metric: took 22.0110788s for pod "coredns-66bc5c9577-r7gwt" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:44.767351   14160 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-124600" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:44.774353   14160 pod_ready.go:94] pod "etcd-default-k8s-diff-port-124600" is "Ready"
	I1212 21:23:44.774353   14160 pod_ready.go:86] duration metric: took 7.0013ms for pod "etcd-default-k8s-diff-port-124600" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:44.779541   14160 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-124600" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:44.786861   14160 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-124600" is "Ready"
	I1212 21:23:44.786861   14160 pod_ready.go:86] duration metric: took 7.3192ms for pod "kube-apiserver-default-k8s-diff-port-124600" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:44.790455   14160 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-124600" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:44.958511   14160 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-124600" is "Ready"
	I1212 21:23:44.958599   14160 pod_ready.go:86] duration metric: took 168.1411ms for pod "kube-controller-manager-default-k8s-diff-port-124600" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:45.158399   14160 pod_ready.go:83] waiting for pod "kube-proxy-2pvfg" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:45.557624   14160 pod_ready.go:94] pod "kube-proxy-2pvfg" is "Ready"
	I1212 21:23:45.557624   14160 pod_ready.go:86] duration metric: took 399.2187ms for pod "kube-proxy-2pvfg" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:45.758026   14160 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-124600" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:46.157650   14160 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-124600" is "Ready"
	I1212 21:23:46.158249   14160 pod_ready.go:86] duration metric: took 400.1515ms for pod "kube-scheduler-default-k8s-diff-port-124600" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:23:46.158249   14160 pod_ready.go:40] duration metric: took 23.4127353s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 21:23:46.259466   14160 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 21:23:46.263937   14160 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-124600" cluster and "default" namespace by default
	I1212 21:23:57.490599   11500 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 21:23:57.490599   11500 kubeadm.go:319] 
	I1212 21:23:57.490599   11500 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 21:23:57.495885   11500 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 21:23:57.496001   11500 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 21:23:57.497139   11500 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 21:23:57.497139   11500 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 21:23:57.497139   11500 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 21:23:57.497139   11500 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 21:23:57.497669   11500 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 21:23:57.497746   11500 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 21:23:57.497746   11500 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 21:23:57.497746   11500 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 21:23:57.497746   11500 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 21:23:57.497746   11500 kubeadm.go:319] CONFIG_INET: enabled
	I1212 21:23:57.498271   11500 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 21:23:57.498345   11500 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 21:23:57.498345   11500 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 21:23:57.498345   11500 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 21:23:57.498345   11500 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 21:23:57.498900   11500 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 21:23:57.498900   11500 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 21:23:57.498900   11500 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 21:23:57.498900   11500 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 21:23:57.498900   11500 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 21:23:57.498900   11500 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 21:23:57.499450   11500 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 21:23:57.499613   11500 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 21:23:57.499682   11500 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 21:23:57.499716   11500 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 21:23:57.499716   11500 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 21:23:57.499716   11500 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 21:23:57.499716   11500 kubeadm.go:319] OS: Linux
	I1212 21:23:57.499716   11500 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 21:23:57.500238   11500 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 21:23:57.500273   11500 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 21:23:57.500273   11500 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 21:23:57.500273   11500 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 21:23:57.500273   11500 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 21:23:57.500273   11500 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 21:23:57.500273   11500 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 21:23:57.500863   11500 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 21:23:57.501070   11500 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:23:57.501182   11500 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:23:57.501182   11500 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 21:23:57.501182   11500 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:23:57.504498   11500 out.go:252]   - Generating certificates and keys ...
	I1212 21:23:57.505131   11500 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 21:23:57.505131   11500 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 21:23:57.505131   11500 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 21:23:57.505131   11500 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 21:23:57.505777   11500 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 21:23:57.505777   11500 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 21:23:57.505777   11500 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 21:23:57.506311   11500 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-285600] and IPs [192.168.121.2 127.0.0.1 ::1]
	I1212 21:23:57.506429   11500 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 21:23:57.506429   11500 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-285600] and IPs [192.168.121.2 127.0.0.1 ::1]
	I1212 21:23:57.506429   11500 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 21:23:57.506429   11500 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 21:23:57.506990   11500 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 21:23:57.506990   11500 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:23:57.506990   11500 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:23:57.506990   11500 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 21:23:57.506990   11500 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:23:57.506990   11500 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:23:57.506990   11500 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:23:57.506990   11500 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:23:57.506990   11500 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:23:57.510650   11500 out.go:252]   - Booting up control plane ...
	I1212 21:23:57.510650   11500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:23:57.510650   11500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:23:57.510650   11500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:23:57.511664   11500 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:23:57.511664   11500 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 21:23:57.511664   11500 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 21:23:57.511664   11500 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:23:57.511664   11500 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 21:23:57.511664   11500 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 21:23:57.512655   11500 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 21:23:57.512655   11500 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000951132s
	I1212 21:23:57.512655   11500 kubeadm.go:319] 
	I1212 21:23:57.512655   11500 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 21:23:57.512655   11500 kubeadm.go:319] 	- The kubelet is not running
	I1212 21:23:57.512655   11500 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 21:23:57.512655   11500 kubeadm.go:319] 
	I1212 21:23:57.512655   11500 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 21:23:57.512655   11500 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 21:23:57.512655   11500 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 21:23:57.512655   11500 kubeadm.go:319] 
	W1212 21:23:57.513649   11500 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-285600] and IPs [192.168.121.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-285600] and IPs [192.168.121.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000951132s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 21:23:57.516687   11500 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1212 21:23:57.973632   11500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:23:58.000358   11500 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 21:23:58.005518   11500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:23:58.022197   11500 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:23:58.022197   11500 kubeadm.go:158] found existing configuration files:
	
	I1212 21:23:58.026872   11500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 21:23:58.039115   11500 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 21:23:58.043123   11500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 21:23:58.060114   11500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 21:23:58.073122   11500 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 21:23:58.076119   11500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 21:23:58.092125   11500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 21:23:58.107123   11500 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 21:23:58.112123   11500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 21:23:58.132133   11500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 21:23:58.145128   11500 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 21:23:58.149118   11500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 21:23:58.165115   11500 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 21:23:58.280707   11500 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1212 21:23:58.378404   11500 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 21:23:58.484549   11500 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 21:26:50.572138    3280 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 21:26:50.572138    3280 kubeadm.go:319] 
	I1212 21:26:50.572138    3280 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 21:26:50.576372    3280 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 21:26:50.576562    3280 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 21:26:50.576743    3280 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 21:26:50.576743    3280 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 21:26:50.576743    3280 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 21:26:50.576743    3280 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 21:26:50.576743    3280 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 21:26:50.577278    3280 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 21:26:50.577527    3280 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 21:26:50.577527    3280 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 21:26:50.577527    3280 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 21:26:50.577527    3280 kubeadm.go:319] CONFIG_INET: enabled
	I1212 21:26:50.577527    3280 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 21:26:50.577527    3280 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 21:26:50.578180    3280 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 21:26:50.578223    3280 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 21:26:50.578223    3280 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 21:26:50.578223    3280 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 21:26:50.578223    3280 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 21:26:50.578753    3280 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 21:26:50.578857    3280 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 21:26:50.579009    3280 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 21:26:50.579109    3280 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 21:26:50.579235    3280 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 21:26:50.579500    3280 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 21:26:50.579604    3280 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 21:26:50.579832    3280 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 21:26:50.579931    3280 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 21:26:50.580029    3280 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 21:26:50.580029    3280 kubeadm.go:319] OS: Linux
	I1212 21:26:50.580029    3280 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 21:26:50.580029    3280 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 21:26:50.580029    3280 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 21:26:50.580029    3280 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 21:26:50.580562    3280 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 21:26:50.580709    3280 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 21:26:50.580788    3280 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 21:26:50.580931    3280 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 21:26:50.580972    3280 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 21:26:50.580972    3280 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:26:50.580972    3280 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:26:50.581495    3280 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 21:26:50.581626    3280 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:26:50.585055    3280 out.go:252]   - Generating certificates and keys ...
	I1212 21:26:50.585055    3280 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 21:26:50.585055    3280 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 21:26:50.585694    3280 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 21:26:50.585694    3280 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 21:26:50.585694    3280 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 21:26:50.585694    3280 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 21:26:50.585694    3280 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 21:26:50.586227    3280 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-449900] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1212 21:26:50.586357    3280 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 21:26:50.586417    3280 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-449900] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1212 21:26:50.586417    3280 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 21:26:50.586417    3280 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 21:26:50.586417    3280 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 21:26:50.587005    3280 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:26:50.587068    3280 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:26:50.587068    3280 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 21:26:50.587068    3280 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:26:50.587068    3280 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:26:50.587068    3280 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:26:50.587734    3280 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:26:50.587927    3280 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:26:50.590646    3280 out.go:252]   - Booting up control plane ...
	I1212 21:26:50.591259    3280 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:26:50.591837    3280 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:26:50.591837    3280 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:26:50.591837    3280 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:26:50.591837    3280 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 21:26:50.592415    3280 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 21:26:50.592415    3280 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:26:50.592415    3280 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 21:26:50.592415    3280 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 21:26:50.592415    3280 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 21:26:50.592415    3280 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001153116s
	I1212 21:26:50.592415    3280 kubeadm.go:319] 
	I1212 21:26:50.593382    3280 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 21:26:50.593382    3280 kubeadm.go:319] 	- The kubelet is not running
	I1212 21:26:50.593382    3280 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 21:26:50.593382    3280 kubeadm.go:319] 
	I1212 21:26:50.593382    3280 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 21:26:50.593382    3280 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 21:26:50.593382    3280 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 21:26:50.593382    3280 kubeadm.go:319] 
	W1212 21:26:50.593382    3280 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-449900] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-449900] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001153116s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 21:26:50.597384    3280 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1212 21:26:51.058393    3280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:26:51.077528    3280 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 21:26:51.081780    3280 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:26:51.095285    3280 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:26:51.095342    3280 kubeadm.go:158] found existing configuration files:
	
	I1212 21:26:51.100877    3280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 21:26:51.114399    3280 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 21:26:51.119274    3280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 21:26:51.137891    3280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 21:26:51.152853    3280 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 21:26:51.157180    3280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 21:26:51.176783    3280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 21:26:51.190524    3280 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 21:26:51.194597    3280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 21:26:51.212488    3280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 21:26:51.228065    3280 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 21:26:51.232039    3280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 21:26:51.250057    3280 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 21:26:51.372297    3280 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1212 21:26:51.461499    3280 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 21:26:51.553708    3280 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 21:27:59.635671   11500 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 21:27:59.635671   11500 kubeadm.go:319] 
	I1212 21:27:59.636285   11500 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 21:27:59.640685   11500 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 21:27:59.640685   11500 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 21:27:59.641210   11500 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 21:27:59.641454   11500 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 21:27:59.641454   11500 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 21:27:59.641454   11500 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 21:27:59.641454   11500 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 21:27:59.641454   11500 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 21:27:59.641454   11500 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 21:27:59.642159   11500 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 21:27:59.642187   11500 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 21:27:59.642187   11500 kubeadm.go:319] CONFIG_INET: enabled
	I1212 21:27:59.642187   11500 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 21:27:59.642187   11500 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 21:27:59.642718   11500 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 21:27:59.642918   11500 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 21:27:59.643104   11500 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 21:27:59.643295   11500 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 21:27:59.643295   11500 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 21:27:59.643295   11500 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 21:27:59.643295   11500 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 21:27:59.643295   11500 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 21:27:59.643935   11500 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 21:27:59.643987   11500 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 21:27:59.643987   11500 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 21:27:59.643987   11500 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 21:27:59.643987   11500 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 21:27:59.643987   11500 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 21:27:59.644635   11500 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 21:27:59.644733   11500 kubeadm.go:319] OS: Linux
	I1212 21:27:59.644880   11500 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 21:27:59.645003   11500 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 21:27:59.645114   11500 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 21:27:59.645225   11500 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 21:27:59.645248   11500 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 21:27:59.645248   11500 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 21:27:59.645248   11500 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 21:27:59.645248   11500 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 21:27:59.645248   11500 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 21:27:59.645248   11500 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:27:59.645998   11500 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:27:59.646240   11500 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 21:27:59.646401   11500 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:27:59.649353   11500 out.go:252]   - Generating certificates and keys ...
	I1212 21:27:59.649353   11500 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 21:27:59.649353   11500 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 21:27:59.649353   11500 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 21:27:59.649996   11500 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 21:27:59.649996   11500 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 21:27:59.649996   11500 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 21:27:59.649996   11500 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 21:27:59.649996   11500 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 21:27:59.650580   11500 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 21:27:59.650580   11500 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 21:27:59.650580   11500 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 21:27:59.650580   11500 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:27:59.650580   11500 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:27:59.650580   11500 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 21:27:59.651191   11500 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:27:59.651254   11500 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:27:59.651254   11500 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:27:59.651254   11500 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:27:59.651254   11500 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:27:59.653668   11500 out.go:252]   - Booting up control plane ...
	I1212 21:27:59.653940   11500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:27:59.653940   11500 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 21:27:59.655077   11500 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 21:27:59.655321   11500 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 21:27:59.655492   11500 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00060482s
	I1212 21:27:59.655492   11500 kubeadm.go:319] 
	I1212 21:27:59.655630   11500 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 21:27:59.655630   11500 kubeadm.go:319] 	- The kubelet is not running
	I1212 21:27:59.655821   11500 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 21:27:59.655821   11500 kubeadm.go:319] 
	I1212 21:27:59.656041   11500 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 21:27:59.656041   11500 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 21:27:59.656041   11500 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 21:27:59.656041   11500 kubeadm.go:319] 
	I1212 21:27:59.656041   11500 kubeadm.go:403] duration metric: took 8m4.8179078s to StartCluster
	I1212 21:27:59.656041   11500 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:27:59.659651   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:27:59.720934   11500 cri.go:89] found id: ""
	I1212 21:27:59.720934   11500 logs.go:282] 0 containers: []
	W1212 21:27:59.720934   11500 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:27:59.720934   11500 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:27:59.725183   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:27:59.766585   11500 cri.go:89] found id: ""
	I1212 21:27:59.766585   11500 logs.go:282] 0 containers: []
	W1212 21:27:59.766585   11500 logs.go:284] No container was found matching "etcd"
	I1212 21:27:59.766585   11500 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:27:59.771623   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:27:59.811981   11500 cri.go:89] found id: ""
	I1212 21:27:59.811981   11500 logs.go:282] 0 containers: []
	W1212 21:27:59.811981   11500 logs.go:284] No container was found matching "coredns"
	I1212 21:27:59.811981   11500 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:27:59.817402   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:27:59.863867   11500 cri.go:89] found id: ""
	I1212 21:27:59.863867   11500 logs.go:282] 0 containers: []
	W1212 21:27:59.863867   11500 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:27:59.863867   11500 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:27:59.874092   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:27:59.916790   11500 cri.go:89] found id: ""
	I1212 21:27:59.916790   11500 logs.go:282] 0 containers: []
	W1212 21:27:59.916790   11500 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:27:59.916790   11500 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:27:59.921036   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:27:59.972193   11500 cri.go:89] found id: ""
	I1212 21:27:59.972193   11500 logs.go:282] 0 containers: []
	W1212 21:27:59.972193   11500 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:27:59.972193   11500 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:27:59.976673   11500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:28:00.020419   11500 cri.go:89] found id: ""
	I1212 21:28:00.020419   11500 logs.go:282] 0 containers: []
	W1212 21:28:00.020419   11500 logs.go:284] No container was found matching "kindnet"
	I1212 21:28:00.020419   11500 logs.go:123] Gathering logs for container status ...
	I1212 21:28:00.020419   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:28:00.075393   11500 logs.go:123] Gathering logs for kubelet ...
	I1212 21:28:00.075393   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:28:00.136556   11500 logs.go:123] Gathering logs for dmesg ...
	I1212 21:28:00.136556   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:28:00.180601   11500 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:28:00.180601   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:28:00.264769   11500 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:28:00.257747   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.258889   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.260305   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.261725   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.262897   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:28:00.257747   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.258889   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.260305   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.261725   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:28:00.262897   10846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:28:00.264769   11500 logs.go:123] Gathering logs for Docker ...
	I1212 21:28:00.264769   11500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1212 21:28:00.295184   11500 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00060482s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 21:28:00.295286   11500 out.go:285] * 
	W1212 21:28:00.295361   11500 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00060482s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 21:28:00.295361   11500 out.go:285] * 
	W1212 21:28:00.297172   11500 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 21:28:00.306876   11500 out.go:203] 
	W1212 21:28:00.310659   11500 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00060482s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 21:28:00.310880   11500 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 21:28:00.310880   11500 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 21:28:00.312599   11500 out.go:203] 
	
	
	==> Docker <==
	Dec 12 21:19:27 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:27.896422880Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 12 21:19:27 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:27.896514789Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 12 21:19:27 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:27.896525790Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 12 21:19:27 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:27.896530891Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 21:19:27 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:27.896538492Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 12 21:19:27 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:27.896562994Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 12 21:19:27 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:27.896607799Z" level=info msg="Initializing buildkit"
	Dec 12 21:19:28 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:28.063364015Z" level=info msg="Completed buildkit initialization"
	Dec 12 21:19:28 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:28.070100507Z" level=info msg="Daemon has completed initialization"
	Dec 12 21:19:28 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:28.070204618Z" level=info msg="API listen on /run/docker.sock"
	Dec 12 21:19:28 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:28.070271524Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 21:19:28 no-preload-285600 dockerd[1173]: time="2025-12-12T21:19:28.070381736Z" level=info msg="API listen on [::]:2376"
	Dec 12 21:19:28 no-preload-285600 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 12 21:19:28 no-preload-285600 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Start docker client with request timeout 0s"
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Loaded network plugin cni"
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 12 21:19:28 no-preload-285600 cri-dockerd[1463]: time="2025-12-12T21:19:28Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 12 21:19:28 no-preload-285600 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:30:07.686611   13796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:30:07.688121   13796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:30:07.689351   13796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:30:07.692698   13796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:30:07.693875   13796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[Dec12 21:23] CPU: 13 PID: 434005 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f45063b9b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f45063b9af6.
	[  +0.000001] RSP: 002b:00007fffb2f7a7b0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.884221] CPU: 10 PID: 434152 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f1ab5b6bb20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7f1ab5b6baf6.
	[  +0.000001] RSP: 002b:00007fffe51bbd80 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +3.005046] tmpfs: Unknown parameter 'noswap'
	[Dec12 21:24] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 21:30:07 up  2:31,  0 user,  load average: 0.41, 1.92, 3.30
	Linux no-preload-285600 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 21:30:04 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:30:05 no-preload-285600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 487.
	Dec 12 21:30:05 no-preload-285600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:30:05 no-preload-285600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:30:05 no-preload-285600 kubelet[13616]: E1212 21:30:05.109318   13616 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:30:05 no-preload-285600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:30:05 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:30:05 no-preload-285600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 488.
	Dec 12 21:30:05 no-preload-285600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:30:05 no-preload-285600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:30:05 no-preload-285600 kubelet[13639]: E1212 21:30:05.872025   13639 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:30:05 no-preload-285600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:30:05 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:30:06 no-preload-285600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 489.
	Dec 12 21:30:06 no-preload-285600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:30:06 no-preload-285600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:30:06 no-preload-285600 kubelet[13660]: E1212 21:30:06.637182   13660 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:30:06 no-preload-285600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:30:06 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:30:07 no-preload-285600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 490.
	Dec 12 21:30:07 no-preload-285600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:30:07 no-preload-285600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:30:07 no-preload-285600 kubelet[13700]: E1212 21:30:07.377126   13700 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:30:07 no-preload-285600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:30:07 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-285600 -n no-preload-285600
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-285600 -n no-preload-285600: exit status 6 (600.1389ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1212 21:30:08.828567    9388 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-285600" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-285600" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (119.61s)

TestStartStop/group/no-preload/serial/SecondStart (377.91s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-285600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0
E1212 21:30:13.435579   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:30:18.035105   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:30:18.360878   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-124600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:30:22.826185   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:30:31.906276   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:30:33.649190   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-246400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:30:38.450354   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:30:50.536282   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p no-preload-285600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 80 (6m13.9186694s)

-- stdout --
	* [no-preload-285600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting "no-preload-285600" primary control-plane node in "no-preload-285600" cluster
	* Pulling base image v0.0.48-1765505794-22112 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1212 21:30:11.311431   13804 out.go:360] Setting OutFile to fd 2028 ...
	I1212 21:30:11.366494   13804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:30:11.367494   13804 out.go:374] Setting ErrFile to fd 840...
	I1212 21:30:11.367494   13804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:30:11.380496   13804 out.go:368] Setting JSON to false
	I1212 21:30:11.382494   13804 start.go:133] hostinfo: {"hostname":"minikube4","uptime":9149,"bootTime":1765565862,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 21:30:11.382494   13804 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 21:30:11.386494   13804 out.go:179] * [no-preload-285600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 21:30:11.389494   13804 notify.go:221] Checking for updates...
	I1212 21:30:11.390508   13804 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:30:11.393495   13804 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:30:11.395506   13804 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 21:30:11.398496   13804 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 21:30:11.400504   13804 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:30:11.403497   13804 config.go:182] Loaded profile config "no-preload-285600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:30:11.405494   13804 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 21:30:11.518260   13804 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 21:30:11.522047   13804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:30:11.753278   13804 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 21:30:11.731465297 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:30:11.756457   13804 out.go:179] * Using the docker driver based on existing profile
	I1212 21:30:11.760219   13804 start.go:309] selected driver: docker
	I1212 21:30:11.760257   13804 start.go:927] validating driver "docker" against &{Name:no-preload-285600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-285600 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:30:11.760327   13804 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:30:11.846740   13804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:30:12.077144   13804 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 21:30:12.058111571 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:30:12.077698   13804 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:30:12.077698   13804 cni.go:84] Creating CNI manager for ""
	I1212 21:30:12.077698   13804 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:30:12.077698   13804 start.go:353] cluster config:
	{Name:no-preload-285600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-285600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:30:12.080814   13804 out.go:179] * Starting "no-preload-285600" primary control-plane node in "no-preload-285600" cluster
	I1212 21:30:12.083912   13804 cache.go:134] Beginning downloading kic base image for docker with docker
	I1212 21:30:12.086321   13804 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:30:12.089654   13804 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 21:30:12.089654   13804 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:30:12.089654   13804 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\config.json ...
	I1212 21:30:12.089654   13804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1212 21:30:12.089654   13804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1212 21:30:12.089654   13804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1212 21:30:12.089654   13804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1212 21:30:12.090649   13804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1212 21:30:12.090649   13804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1212 21:30:12.090649   13804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1212 21:30:12.090649   13804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1212 21:30:12.353137   13804 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:30:12.353137   13804 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:30:12.353137   13804 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:30:12.353137   13804 start.go:360] acquireMachinesLock for no-preload-285600: {Name:mk2731f875a3a62f76017c58cc7d43a1bb1f8ba5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:30:12.353137   13804 start.go:364] duration metric: took 0s to acquireMachinesLock for "no-preload-285600"
	I1212 21:30:12.353137   13804 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:30:12.353684   13804 fix.go:54] fixHost starting: 
	I1212 21:30:12.365514   13804 cli_runner.go:164] Run: docker container inspect no-preload-285600 --format={{.State.Status}}
	I1212 21:30:12.437166   13804 fix.go:112] recreateIfNeeded on no-preload-285600: state=Stopped err=<nil>
	W1212 21:30:12.437166   13804 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:30:12.443159   13804 out.go:252] * Restarting existing docker container for "no-preload-285600" ...
	I1212 21:30:12.448159   13804 cli_runner.go:164] Run: docker start no-preload-285600
	I1212 21:30:13.953419   13804 cli_runner.go:217] Completed: docker start no-preload-285600: (1.5052355s)
	I1212 21:30:13.960859   13804 cli_runner.go:164] Run: docker container inspect no-preload-285600 --format={{.State.Status}}
	I1212 21:30:14.031860   13804 kic.go:430] container "no-preload-285600" state is running.
	I1212 21:30:14.039849   13804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-285600
	I1212 21:30:14.112858   13804 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\config.json ...
	I1212 21:30:14.114845   13804 machine.go:94] provisionDockerMachine start ...
	I1212 21:30:14.119854   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:14.192854   13804 main.go:143] libmachine: Using SSH client type: native
	I1212 21:30:14.193857   13804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62838 <nil> <nil>}
	I1212 21:30:14.193857   13804 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:30:14.195874   13804 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 21:30:14.957274   13804 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:30:14.957533   13804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1212 21:30:14.957533   13804 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 2.866838s
	I1212 21:30:14.957533   13804 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1212 21:30:14.963183   13804 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:30:14.963323   13804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1212 21:30:14.963323   13804 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 2.8726277s
	I1212 21:30:14.963323   13804 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1212 21:30:14.964339   13804 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:30:14.964339   13804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1212 21:30:14.964339   13804 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 2.8736432s
	I1212 21:30:14.964339   13804 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1212 21:30:14.964339   13804 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:30:14.964339   13804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1212 21:30:14.964339   13804 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 2.8746379s
	I1212 21:30:14.964339   13804 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1212 21:30:14.995149   13804 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:30:14.995149   13804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1212 21:30:14.995149   13804 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 2.9054481s
	I1212 21:30:14.995149   13804 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1212 21:30:15.001398   13804 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:30:15.001398   13804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1212 21:30:15.001398   13804 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 2.9116969s
	I1212 21:30:15.001398   13804 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1212 21:30:15.006281   13804 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:30:15.006281   13804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1212 21:30:15.006978   13804 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 2.9162031s
	I1212 21:30:15.006978   13804 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1212 21:30:15.039446   13804 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:30:15.039446   13804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1212 21:30:15.039446   13804 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 2.9497439s
	I1212 21:30:15.039446   13804 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1212 21:30:15.039446   13804 cache.go:87] Successfully saved all images to host disk.
	I1212 21:30:17.371371   13804 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-285600
	
	I1212 21:30:17.371371   13804 ubuntu.go:182] provisioning hostname "no-preload-285600"
	I1212 21:30:17.374694   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:17.431417   13804 main.go:143] libmachine: Using SSH client type: native
	I1212 21:30:17.431417   13804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62838 <nil> <nil>}
	I1212 21:30:17.431417   13804 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-285600 && echo "no-preload-285600" | sudo tee /etc/hostname
	I1212 21:30:17.615567   13804 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-285600
	
	I1212 21:30:17.620003   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:17.675055   13804 main.go:143] libmachine: Using SSH client type: native
	I1212 21:30:17.675719   13804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62838 <nil> <nil>}
	I1212 21:30:17.675719   13804 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-285600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-285600/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-285600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:30:17.863046   13804 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:30:17.863046   13804 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1212 21:30:17.863046   13804 ubuntu.go:190] setting up certificates
	I1212 21:30:17.863579   13804 provision.go:84] configureAuth start
	I1212 21:30:17.867203   13804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-285600
	I1212 21:30:17.921910   13804 provision.go:143] copyHostCerts
	I1212 21:30:17.921910   13804 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1212 21:30:17.921910   13804 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1212 21:30:17.922850   13804 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1212 21:30:17.923414   13804 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1212 21:30:17.923414   13804 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1212 21:30:17.923977   13804 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 21:30:17.924758   13804 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1212 21:30:17.924758   13804 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1212 21:30:17.924916   13804 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 21:30:17.925647   13804 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.no-preload-285600 san=[127.0.0.1 192.168.121.2 localhost minikube no-preload-285600]
	I1212 21:30:17.969098   13804 provision.go:177] copyRemoteCerts
	I1212 21:30:17.972961   13804 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:30:17.975732   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:18.033900   13804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62838 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	I1212 21:30:18.156529   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 21:30:18.190271   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 21:30:18.219028   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 21:30:18.247371   13804 provision.go:87] duration metric: took 383.7852ms to configureAuth
	I1212 21:30:18.247371   13804 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:30:18.248196   13804 config.go:182] Loaded profile config "no-preload-285600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:30:18.253065   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:18.307356   13804 main.go:143] libmachine: Using SSH client type: native
	I1212 21:30:18.308437   13804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62838 <nil> <nil>}
	I1212 21:30:18.308437   13804 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 21:30:18.484387   13804 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1212 21:30:18.484387   13804 ubuntu.go:71] root file system type: overlay
	I1212 21:30:18.484387   13804 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 21:30:18.488431   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:18.543927   13804 main.go:143] libmachine: Using SSH client type: native
	I1212 21:30:18.544057   13804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62838 <nil> <nil>}
	I1212 21:30:18.544057   13804 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 21:30:18.725295   13804 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 21:30:18.729293   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:18.786383   13804 main.go:143] libmachine: Using SSH client type: native
	I1212 21:30:18.787353   13804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62838 <nil> <nil>}
	I1212 21:30:18.787415   13804 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 21:30:18.969169   13804 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:30:18.969169   13804 machine.go:97] duration metric: took 4.8542454s to provisionDockerMachine
	I1212 21:30:18.969169   13804 start.go:293] postStartSetup for "no-preload-285600" (driver="docker")
	I1212 21:30:18.969169   13804 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:30:18.973559   13804 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:30:18.977516   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:19.030405   13804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62838 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	I1212 21:30:19.165106   13804 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:30:19.173383   13804 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:30:19.173383   13804 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:30:19.173383   13804 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1212 21:30:19.173383   13804 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1212 21:30:19.174601   13804 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> 133962.pem in /etc/ssl/certs
	I1212 21:30:19.179034   13804 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:30:19.191703   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /etc/ssl/certs/133962.pem (1708 bytes)
	I1212 21:30:19.218878   13804 start.go:296] duration metric: took 249.7055ms for postStartSetup
	I1212 21:30:19.224011   13804 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:30:19.227131   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:19.279470   13804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62838 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	I1212 21:30:19.406985   13804 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:30:19.415865   13804 fix.go:56] duration metric: took 7.062067s for fixHost
	I1212 21:30:19.415865   13804 start.go:83] releasing machines lock for "no-preload-285600", held for 7.0626137s
	I1212 21:30:19.419613   13804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-285600
	I1212 21:30:19.476904   13804 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1212 21:30:19.481453   13804 ssh_runner.go:195] Run: cat /version.json
	I1212 21:30:19.481484   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:19.483912   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:19.536799   13804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62838 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	I1212 21:30:19.547561   13804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62838 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	W1212 21:30:19.661665   13804 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1212 21:30:19.667210   13804 ssh_runner.go:195] Run: systemctl --version
	I1212 21:30:19.682255   13804 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:30:19.691854   13804 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:30:19.696344   13804 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:30:19.710554   13804 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:30:19.710554   13804 start.go:496] detecting cgroup driver to use...
	I1212 21:30:19.710554   13804 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:30:19.710554   13804 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:30:19.738854   13804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 21:30:19.758305   13804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1212 21:30:19.763550   13804 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1212 21:30:19.763550   13804 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1212 21:30:19.778518   13804 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 21:30:19.782511   13804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 21:30:19.803423   13804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:30:19.823199   13804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 21:30:19.842875   13804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:30:19.861015   13804 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:30:19.878016   13804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 21:30:19.896016   13804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 21:30:19.917384   13804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 21:30:19.937797   13804 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:30:19.955074   13804 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:30:19.974670   13804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:30:20.125841   13804 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 21:30:20.307940   13804 start.go:496] detecting cgroup driver to use...
	I1212 21:30:20.307940   13804 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:30:20.312305   13804 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 21:30:20.338880   13804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:30:20.361799   13804 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:30:20.425840   13804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:30:20.448078   13804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 21:30:20.466273   13804 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:30:20.493401   13804 ssh_runner.go:195] Run: which cri-dockerd
	I1212 21:30:20.505640   13804 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 21:30:20.517978   13804 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1212 21:30:20.546077   13804 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 21:30:20.685945   13804 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 21:30:20.820797   13804 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 21:30:20.820797   13804 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 21:30:20.846868   13804 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1212 21:30:20.870150   13804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:30:21.006241   13804 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 21:30:21.847456   13804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:30:21.870131   13804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1212 21:30:21.892265   13804 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1212 21:30:21.918146   13804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:30:21.940975   13804 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 21:30:22.091526   13804 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 21:30:22.237813   13804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:30:22.375430   13804 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 21:30:22.400803   13804 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1212 21:30:22.424619   13804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:30:22.577023   13804 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1212 21:30:22.684499   13804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:30:22.703199   13804 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 21:30:22.707457   13804 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 21:30:22.717003   13804 start.go:564] Will wait 60s for crictl version
	I1212 21:30:22.722114   13804 ssh_runner.go:195] Run: which crictl
	I1212 21:30:22.736201   13804 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:30:22.783830   13804 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1212 21:30:22.787385   13804 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:30:22.831267   13804 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:30:22.876285   13804 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1212 21:30:22.880058   13804 cli_runner.go:164] Run: docker exec -t no-preload-285600 dig +short host.docker.internal
	I1212 21:30:23.014334   13804 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1212 21:30:23.019335   13804 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1212 21:30:23.026955   13804 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:30:23.046973   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:23.103000   13804 kubeadm.go:884] updating cluster {Name:no-preload-285600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-285600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 21:30:23.103289   13804 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 21:30:23.108430   13804 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:30:23.145267   13804 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 21:30:23.145267   13804 cache_images.go:86] Images are preloaded, skipping loading
	I1212 21:30:23.145267   13804 kubeadm.go:935] updating node { 192.168.121.2 8443 v1.35.0-beta.0 docker true true} ...
	I1212 21:30:23.145794   13804 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-285600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.121.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-285600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:30:23.149307   13804 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 21:30:23.218275   13804 cni.go:84] Creating CNI manager for ""
	I1212 21:30:23.218275   13804 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:30:23.218275   13804 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 21:30:23.218275   13804 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.121.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-285600 NodeName:no-preload-285600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.121.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.121.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:30:23.218275   13804 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.121.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-285600"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.121.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.121.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:30:23.224071   13804 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 21:30:23.236229   13804 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:30:23.240995   13804 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:30:23.253852   13804 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1212 21:30:23.272662   13804 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 21:30:23.293961   13804 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1212 21:30:23.318313   13804 ssh_runner.go:195] Run: grep 192.168.121.2	control-plane.minikube.internal$ /etc/hosts
	I1212 21:30:23.325082   13804 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.121.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:30:23.346396   13804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:30:23.486209   13804 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:30:23.509994   13804 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600 for IP: 192.168.121.2
	I1212 21:30:23.509994   13804 certs.go:195] generating shared ca certs ...
	I1212 21:30:23.509994   13804 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:30:23.510778   13804 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1212 21:30:23.510778   13804 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1212 21:30:23.510778   13804 certs.go:257] generating profile certs ...
	I1212 21:30:23.511512   13804 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\client.key
	I1212 21:30:23.511512   13804 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.key.a3b2baf6
	I1212 21:30:23.512294   13804 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\proxy-client.key
	I1212 21:30:23.513282   13804 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem (1338 bytes)
	W1212 21:30:23.513306   13804 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396_empty.pem, impossibly tiny 0 bytes
	I1212 21:30:23.513306   13804 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1212 21:30:23.513306   13804 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 21:30:23.513825   13804 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 21:30:23.514133   13804 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1212 21:30:23.514133   13804 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem (1708 bytes)
	I1212 21:30:23.516015   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:30:23.543721   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 21:30:23.570887   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:30:23.599906   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:30:23.628308   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 21:30:23.655194   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 21:30:23.680557   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:30:23.709445   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:30:23.735490   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:30:23.763952   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem --> /usr/share/ca-certificates/13396.pem (1338 bytes)
	I1212 21:30:23.788819   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /usr/share/ca-certificates/133962.pem (1708 bytes)
	I1212 21:30:23.817493   13804 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:30:23.843244   13804 ssh_runner.go:195] Run: openssl version
	I1212 21:30:23.857029   13804 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:30:23.875085   13804 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:30:23.894989   13804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:30:23.903335   13804 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:30:23.907817   13804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:30:23.954829   13804 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:30:23.973758   13804 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13396.pem
	I1212 21:30:23.992281   13804 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13396.pem /etc/ssl/certs/13396.pem
	I1212 21:30:24.012825   13804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13396.pem
	I1212 21:30:24.021794   13804 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 21:30:24.027262   13804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13396.pem
	I1212 21:30:24.076227   13804 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:30:24.097029   13804 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/133962.pem
	I1212 21:30:24.114364   13804 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/133962.pem /etc/ssl/certs/133962.pem
	I1212 21:30:24.131237   13804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133962.pem
	I1212 21:30:24.139762   13804 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 21:30:24.144290   13804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133962.pem
	I1212 21:30:24.195500   13804 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:30:24.213100   13804 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:30:24.224086   13804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:30:24.274630   13804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:30:24.322795   13804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:30:24.371721   13804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:30:24.422510   13804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:30:24.475266   13804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 21:30:24.519671   13804 kubeadm.go:401] StartCluster: {Name:no-preload-285600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-285600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:30:24.524264   13804 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 21:30:24.559622   13804 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:30:24.571455   13804 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 21:30:24.571455   13804 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 21:30:24.576936   13804 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:30:24.591763   13804 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:30:24.596129   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:24.651902   13804 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-285600" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:30:24.652253   13804 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-285600" cluster setting kubeconfig missing "no-preload-285600" context setting]
	I1212 21:30:24.652697   13804 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:30:24.674806   13804 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:30:24.692277   13804 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1212 21:30:24.692277   13804 kubeadm.go:602] duration metric: took 120.82ms to restartPrimaryControlPlane
	I1212 21:30:24.692277   13804 kubeadm.go:403] duration metric: took 172.6933ms to StartCluster
	I1212 21:30:24.692277   13804 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:30:24.692277   13804 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:30:24.693507   13804 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:30:24.694169   13804 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 21:30:24.694169   13804 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 21:30:24.694746   13804 addons.go:70] Setting storage-provisioner=true in profile "no-preload-285600"
	I1212 21:30:24.694746   13804 addons.go:70] Setting dashboard=true in profile "no-preload-285600"
	I1212 21:30:24.694746   13804 addons.go:239] Setting addon storage-provisioner=true in "no-preload-285600"
	I1212 21:30:24.694746   13804 config.go:182] Loaded profile config "no-preload-285600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:30:24.694746   13804 addons.go:70] Setting default-storageclass=true in profile "no-preload-285600"
	I1212 21:30:24.694746   13804 addons.go:239] Setting addon dashboard=true in "no-preload-285600"
	I1212 21:30:24.694746   13804 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-285600"
	I1212 21:30:24.694746   13804 host.go:66] Checking if "no-preload-285600" exists ...
	W1212 21:30:24.694746   13804 addons.go:248] addon dashboard should already be in state true
	I1212 21:30:24.694746   13804 host.go:66] Checking if "no-preload-285600" exists ...
	I1212 21:30:24.698139   13804 out.go:179] * Verifying Kubernetes components...
	I1212 21:30:24.704555   13804 cli_runner.go:164] Run: docker container inspect no-preload-285600 --format={{.State.Status}}
	I1212 21:30:24.704612   13804 cli_runner.go:164] Run: docker container inspect no-preload-285600 --format={{.State.Status}}
	I1212 21:30:24.704612   13804 cli_runner.go:164] Run: docker container inspect no-preload-285600 --format={{.State.Status}}
	I1212 21:30:24.705748   13804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:30:24.762431   13804 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:30:24.762431   13804 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1212 21:30:24.764424   13804 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:30:24.764424   13804 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:30:24.767454   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:24.767454   13804 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1212 21:30:24.769433   13804 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 21:30:24.769433   13804 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 21:30:24.773442   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:24.780427   13804 addons.go:239] Setting addon default-storageclass=true in "no-preload-285600"
	I1212 21:30:24.780427   13804 host.go:66] Checking if "no-preload-285600" exists ...
	I1212 21:30:24.787430   13804 cli_runner.go:164] Run: docker container inspect no-preload-285600 --format={{.State.Status}}
	I1212 21:30:24.820427   13804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62838 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	I1212 21:30:24.826439   13804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62838 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	I1212 21:30:24.837426   13804 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:30:24.837426   13804 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:30:24.840425   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:24.872429   13804 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:30:24.893413   13804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62838 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	I1212 21:30:24.963677   13804 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 21:30:24.963677   13804 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 21:30:24.967679   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:30:24.982575   13804 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 21:30:24.982575   13804 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 21:30:25.004580   13804 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 21:30:25.004580   13804 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 21:30:25.025729   13804 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 21:30:25.025729   13804 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 21:30:25.051800   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:30:25.053624   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:25.061392   13804 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 21:30:25.061392   13804 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1212 21:30:25.072688   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.072688   13804 retry.go:31] will retry after 158.823977ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.110005   13804 node_ready.go:35] waiting up to 6m0s for node "no-preload-285600" to be "Ready" ...
	I1212 21:30:25.146675   13804 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 21:30:25.146675   13804 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 21:30:25.168917   13804 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 21:30:25.168917   13804 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 21:30:25.190262   13804 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 21:30:25.190262   13804 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1212 21:30:25.237134   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:30:25.255181   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.255181   13804 retry.go:31] will retry after 222.613203ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.258203   13804 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 21:30:25.258203   13804 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 21:30:25.281581   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:30:25.360910   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.360910   13804 retry.go:31] will retry after 528.174411ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:30:25.396771   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.396771   13804 retry.go:31] will retry after 334.337457ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.483899   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:30:25.562673   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.562738   13804 retry.go:31] will retry after 526.924446ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.736852   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:30:25.814449   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.814449   13804 retry.go:31] will retry after 242.822318ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.895040   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:30:25.976722   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.976722   13804 retry.go:31] will retry after 649.835265ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:26.062555   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 21:30:26.094920   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:30:26.173577   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:26.173577   13804 retry.go:31] will retry after 303.723342ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:30:26.206503   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:26.206503   13804 retry.go:31] will retry after 711.474393ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:26.482577   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:30:26.584453   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:26.584453   13804 retry.go:31] will retry after 1.214394493s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:26.632132   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:30:26.707550   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:26.707577   13804 retry.go:31] will retry after 679.917817ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:26.923400   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:30:27.004405   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:27.004405   13804 retry.go:31] will retry after 921.431314ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:27.393372   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:30:27.464948   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:27.464948   13804 retry.go:31] will retry after 1.86941024s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:27.806617   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:30:27.880154   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:27.880250   13804 retry.go:31] will retry after 870.607292ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:27.930624   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:30:28.010568   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:28.010568   13804 retry.go:31] will retry after 1.688030068s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:28.756973   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:30:28.854322   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:28.854322   13804 retry.go:31] will retry after 1.72717743s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:29.339399   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:30:29.418550   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:29.418550   13804 retry.go:31] will retry after 2.160026616s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:29.704224   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:30:29.784607   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:29.784607   13804 retry.go:31] will retry after 1.396897779s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:30.585867   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:30:30.664243   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:30.664314   13804 retry.go:31] will retry after 3.060722722s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:31.188925   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:30:31.270881   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:31.270881   13804 retry.go:31] will retry after 3.544218054s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:31.584146   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:30:31.661710   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:31.661710   13804 retry.go:31] will retry after 3.805789738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:33.730718   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:30:33.815337   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:33.815337   13804 retry.go:31] will retry after 4.430320375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:34.819397   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:30:34.899243   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:34.899243   13804 retry.go:31] will retry after 6.309363077s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:30:35.143657   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:30:35.473027   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:30:35.571773   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:35.571773   13804 retry.go:31] will retry after 2.80996556s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:38.250480   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:30:38.332990   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:38.332990   13804 retry.go:31] will retry after 8.351867848s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:38.387198   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:30:38.470982   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:38.470982   13804 retry.go:31] will retry after 8.954426178s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:41.214251   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:30:41.296230   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:41.296230   13804 retry.go:31] will retry after 7.46364933s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:30:45.188063   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:30:46.689378   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:30:46.780060   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:46.780173   13804 retry.go:31] will retry after 7.773373788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:47.432175   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:30:47.509090   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:47.509090   13804 retry.go:31] will retry after 12.066548893s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:48.765276   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:30:48.850081   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:48.850081   13804 retry.go:31] will retry after 11.297010825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:54.559164   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:30:54.668798   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:54.668798   13804 retry.go:31] will retry after 9.183824945s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:30:55.224252   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:30:59.581067   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:30:59.656772   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:59.656772   13804 retry.go:31] will retry after 12.343146112s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:31:00.152832   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:31:00.258393   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:31:00.258393   13804 retry.go:31] will retry after 14.175931828s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:31:03.857903   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:31:03.940387   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:31:03.940495   13804 retry.go:31] will retry after 12.961917726s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:31:05.261194   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:31:12.006287   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:31:12.116839   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:31:12.116839   13804 retry.go:31] will retry after 16.436096416s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:31:14.440602   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:31:14.524069   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:31:14.524069   13804 retry.go:31] will retry after 28.643403029s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:31:15.302381   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:31:16.907534   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:31:16.996503   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:31:16.996503   13804 retry.go:31] will retry after 47.424716965s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:31:25.338083   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:31:28.558807   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:31:28.646531   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:31:28.646531   13804 retry.go:31] will retry after 25.840068373s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:31:35.377503   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:31:43.173479   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:31:43.254885   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:31:43.254962   13804 retry.go:31] will retry after 19.184111843s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:31:45.418241   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:31:54.493389   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:31:54.576619   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:31:54.577177   13804 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1212 21:31:55.455424   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:32:02.444841   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:02.530332   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:32:02.530332   13804 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 21:32:04.426968   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:04.514916   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:32:04.514916   13804 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 21:32:04.517987   13804 out.go:179] * Enabled addons: 
	I1212 21:32:04.521222   13804 addons.go:530] duration metric: took 1m39.8254383s for enable addons: enabled=[]
	W1212 21:32:05.496160   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:32:15.537579   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:32:25.576602   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:32:35.616553   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:32:45.652119   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:32:55.691200   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:33:05.730439   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:33:15.769327   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:33:25.803331   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:33:35.842883   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:33:45.876156   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:33:55.915985   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:34:05.955333   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:34:15.995826   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:34:26.035692   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:34:36.071150   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:34:46.110883   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:34:56.149055   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:35:06.187241   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:35:16.227474   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:35:26.266522   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:35:36.305111   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:35:46.341652   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:35:56.384511   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:36:06.422793   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:36:16.463968   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:36:25.116556   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1212 21:36:25.116556   13804 node_ready.go:38] duration metric: took 6m0.0006991s for node "no-preload-285600" to be "Ready" ...
	I1212 21:36:25.120615   13804 out.go:203] 
	W1212 21:36:25.123657   13804 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 21:36:25.123657   13804 out.go:285] * 
	* 
	W1212 21:36:25.125621   13804 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 21:36:25.128573   13804 out.go:203] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p no-preload-285600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0": exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-285600
helpers_test.go:244: (dbg) docker inspect no-preload-285600:

-- stdout --
	[
	    {
	        "Id": "87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941",
	        "Created": "2025-12-12T21:19:18.160660304Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 447961,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T21:30:13.371959374Z",
	            "FinishedAt": "2025-12-12T21:30:09.786882361Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941/hostname",
	        "HostsPath": "/var/lib/docker/containers/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941/hosts",
	        "LogPath": "/var/lib/docker/containers/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941-json.log",
	        "Name": "/no-preload-285600",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-285600:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-285600",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7f57614ba244d0d3b05389aca0ac3118a52910d8ae213a8eb43d4329923d883e-init/diff:/var/lib/docker/overlay2/98a48e148b465d6f3cfef773b7defe35ccd4e6e0eb3c49d8b329f27b5b52fa09/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7f57614ba244d0d3b05389aca0ac3118a52910d8ae213a8eb43d4329923d883e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7f57614ba244d0d3b05389aca0ac3118a52910d8ae213a8eb43d4329923d883e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7f57614ba244d0d3b05389aca0ac3118a52910d8ae213a8eb43d4329923d883e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-285600",
	                "Source": "/var/lib/docker/volumes/no-preload-285600/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-285600",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-285600",
	                "name.minikube.sigs.k8s.io": "no-preload-285600",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "91bcdd83bbb23ae9c67dcec01b8d4c16af48c7f986914ad0290fdd4a6c1ce136",
	            "SandboxKey": "/var/run/docker/netns/91bcdd83bbb2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62838"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62839"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62840"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62841"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62842"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-285600": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.121.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:79:02",
	                    "DriverOpts": null,
	                    "NetworkID": "eade8f7b7c484afc6ac9fb22c89a1319287a418e545549931d13e1b2247abede",
	                    "EndpointID": "a19528b5ba1e129df46a773b4e6c518e041141c1355dc620986fcd6472d55808",
	                    "Gateway": "192.168.121.1",
	                    "IPAddress": "192.168.121.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-285600",
	                        "87204d707694"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-285600 -n no-preload-285600
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-285600 -n no-preload-285600: exit status 2 (575.745ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-285600 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-285600 logs -n 25: (1.3538065s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                            ARGS                                                                                                            │           PROFILE            │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-246400                                                                                                                                                                                                  │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ delete  │ -p old-k8s-version-246400                                                                                                                                                                                                  │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ start   │ -p newest-cni-449900 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-124600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                         │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ stop    │ -p default-k8s-diff-port-124600 --alsologtostderr -v=3                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ image   │ embed-certs-729900 image list --format=json                                                                                                                                                                                │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ pause   │ -p embed-certs-729900 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ unpause │ -p embed-certs-729900 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-124600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                    │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ start   │ -p default-k8s-diff-port-124600 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:23 UTC │
	│ delete  │ -p embed-certs-729900                                                                                                                                                                                                      │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:23 UTC │
	│ delete  │ -p embed-certs-729900                                                                                                                                                                                                      │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ image   │ default-k8s-diff-port-124600 image list --format=json                                                                                                                                                                      │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ pause   │ -p default-k8s-diff-port-124600 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ unpause │ -p default-k8s-diff-port-124600 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ delete  │ -p default-k8s-diff-port-124600                                                                                                                                                                                            │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ delete  │ -p default-k8s-diff-port-124600                                                                                                                                                                                            │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ addons  │ enable metrics-server -p no-preload-285600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                    │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:28 UTC │                     │
	│ stop    │ -p no-preload-285600 --alsologtostderr -v=3                                                                                                                                                                                │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:30 UTC │ 12 Dec 25 21:30 UTC │
	│ addons  │ enable dashboard -p no-preload-285600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                               │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:30 UTC │ 12 Dec 25 21:30 UTC │
	│ start   │ -p no-preload-285600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:30 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-449900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                    │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:30 UTC │                     │
	│ stop    │ -p newest-cni-449900 --alsologtostderr -v=3                                                                                                                                                                                │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:32 UTC │ 12 Dec 25 21:32 UTC │
	│ addons  │ enable dashboard -p newest-cni-449900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                               │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:32 UTC │ 12 Dec 25 21:32 UTC │
	│ start   │ -p newest-cni-449900 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:32 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 21:32:30
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 21:32:30.067892    4248 out.go:360] Setting OutFile to fd 948 ...
	I1212 21:32:30.114132    4248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:32:30.114655    4248 out.go:374] Setting ErrFile to fd 1420...
	I1212 21:32:30.114655    4248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:32:30.127088    4248 out.go:368] Setting JSON to false
	I1212 21:32:30.129182    4248 start.go:133] hostinfo: {"hostname":"minikube4","uptime":9288,"bootTime":1765565862,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 21:32:30.129182    4248 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 21:32:30.137179    4248 out.go:179] * [newest-cni-449900] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 21:32:30.141601    4248 notify.go:221] Checking for updates...
	I1212 21:32:30.144580    4248 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:32:30.147458    4248 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:32:30.149957    4248 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 21:32:30.152754    4248 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 21:32:30.158286    4248 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:32:30.162328    4248 config.go:182] Loaded profile config "newest-cni-449900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:32:30.162996    4248 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 21:32:30.280643    4248 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 21:32:30.284638    4248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:32:30.515373    4248 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 21:32:30.495157348 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:32:30.521131    4248 out.go:179] * Using the docker driver based on existing profile
	I1212 21:32:30.522918    4248 start.go:309] selected driver: docker
	I1212 21:32:30.523440    4248 start.go:927] validating driver "docker" against &{Name:newest-cni-449900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:32:30.523530    4248 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:32:30.607623    4248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:32:30.833176    4248 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 21:32:30.815233925 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:32:30.833176    4248 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 21:32:30.833176    4248 cni.go:84] Creating CNI manager for ""
	I1212 21:32:30.833176    4248 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:32:30.834178    4248 start.go:353] cluster config:
	{Name:newest-cni-449900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:32:30.837195    4248 out.go:179] * Starting "newest-cni-449900" primary control-plane node in "newest-cni-449900" cluster
	I1212 21:32:30.839178    4248 cache.go:134] Beginning downloading kic base image for docker with docker
	I1212 21:32:30.842177    4248 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:32:30.845192    4248 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:32:30.845192    4248 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 21:32:30.846208    4248 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1212 21:32:30.846208    4248 cache.go:65] Caching tarball of preloaded images
	I1212 21:32:30.846208    4248 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 21:32:30.846208    4248 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1212 21:32:30.846208    4248 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\config.json ...
	I1212 21:32:30.923261    4248 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:32:30.923313    4248 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:32:30.923343    4248 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:32:30.923343    4248 start.go:360] acquireMachinesLock for newest-cni-449900: {Name:mk48eea84400331f4298a4f493f82f3b8d104477 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:32:30.923343    4248 start.go:364] duration metric: took 0s to acquireMachinesLock for "newest-cni-449900"
	I1212 21:32:30.923343    4248 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:32:30.923343    4248 fix.go:54] fixHost starting: 
	I1212 21:32:30.931113    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:30.993597    4248 fix.go:112] recreateIfNeeded on newest-cni-449900: state=Stopped err=<nil>
	W1212 21:32:30.993597    4248 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:32:30.996597    4248 out.go:252] * Restarting existing docker container for "newest-cni-449900" ...
	I1212 21:32:31.000598    4248 cli_runner.go:164] Run: docker start newest-cni-449900
	I1212 21:32:31.538801    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:31.592240    4248 kic.go:430] container "newest-cni-449900" state is running.
	I1212 21:32:31.597242    4248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-449900
	I1212 21:32:31.648243    4248 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\config.json ...
	I1212 21:32:31.650253    4248 machine.go:94] provisionDockerMachine start ...
	I1212 21:32:31.653233    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:31.705233    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:31.706241    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:31.706241    4248 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:32:31.708237    4248 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 21:32:34.889437    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-449900
	
	I1212 21:32:34.889437    4248 ubuntu.go:182] provisioning hostname "newest-cni-449900"
	I1212 21:32:34.893345    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:34.953803    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:34.953803    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:34.953803    4248 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-449900 && echo "newest-cni-449900" | sudo tee /etc/hostname
	W1212 21:32:35.616553   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:32:35.153212    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-449900
	
	I1212 21:32:35.157498    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:35.214117    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:35.214117    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:35.214117    4248 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-449900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-449900/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-449900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:32:35.408770    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:32:35.408770    4248 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1212 21:32:35.408770    4248 ubuntu.go:190] setting up certificates
	I1212 21:32:35.408770    4248 provision.go:84] configureAuth start
	I1212 21:32:35.413085    4248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-449900
	I1212 21:32:35.465735    4248 provision.go:143] copyHostCerts
	I1212 21:32:35.466783    4248 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1212 21:32:35.466783    4248 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1212 21:32:35.466783    4248 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 21:32:35.468113    4248 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1212 21:32:35.468113    4248 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1212 21:32:35.468113    4248 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1212 21:32:35.469268    4248 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1212 21:32:35.469268    4248 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1212 21:32:35.469268    4248 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 21:32:35.469826    4248 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-449900 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-449900]
	I1212 21:32:35.674609    4248 provision.go:177] copyRemoteCerts
	I1212 21:32:35.678598    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:32:35.681582    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:35.739388    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:35.872528    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 21:32:35.899869    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 21:32:35.927844    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 21:32:35.957449    4248 provision.go:87] duration metric: took 548.67ms to configureAuth
	I1212 21:32:35.957449    4248 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:32:35.957975    4248 config.go:182] Loaded profile config "newest-cni-449900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:32:35.961590    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:36.020998    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:36.021502    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:36.021530    4248 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 21:32:36.199792    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1212 21:32:36.199792    4248 ubuntu.go:71] root file system type: overlay
	I1212 21:32:36.199792    4248 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 21:32:36.203186    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:36.259998    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:36.260186    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:36.260186    4248 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 21:32:36.441991    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 21:32:36.446390    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:36.501449    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:36.501449    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:36.501449    4248 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 21:32:36.684877    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:32:36.684877    4248 machine.go:97] duration metric: took 5.0345422s to provisionDockerMachine
	I1212 21:32:36.684877    4248 start.go:293] postStartSetup for "newest-cni-449900" (driver="docker")
	I1212 21:32:36.684877    4248 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:32:36.689141    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:32:36.692539    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:36.754930    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:36.889809    4248 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:32:36.897890    4248 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:32:36.897963    4248 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:32:36.897963    4248 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1212 21:32:36.898205    4248 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1212 21:32:36.898730    4248 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> 133962.pem in /etc/ssl/certs
	I1212 21:32:36.903242    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:32:36.916977    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /etc/ssl/certs/133962.pem (1708 bytes)
	I1212 21:32:36.944800    4248 start.go:296] duration metric: took 259.9189ms for postStartSetup
	I1212 21:32:36.949417    4248 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:32:36.952948    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:37.004507    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:37.144023    4248 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:32:37.153150    4248 fix.go:56] duration metric: took 6.229706s for fixHost
	I1212 21:32:37.153150    4248 start.go:83] releasing machines lock for "newest-cni-449900", held for 6.229706s
	I1212 21:32:37.156410    4248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-449900
	I1212 21:32:37.223471    4248 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1212 21:32:37.227811    4248 ssh_runner.go:195] Run: cat /version.json
	I1212 21:32:37.227811    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:37.230777    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:37.282580    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:37.288636    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	W1212 21:32:37.396194    4248 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1212 21:32:37.419380    4248 ssh_runner.go:195] Run: systemctl --version
	I1212 21:32:37.433080    4248 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:32:37.443489    4248 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:32:37.447019    4248 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:32:37.460679    4248 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:32:37.460679    4248 start.go:496] detecting cgroup driver to use...
	I1212 21:32:37.460679    4248 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:32:37.460679    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:32:37.488659    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1212 21:32:37.507342    4248 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1212 21:32:37.507342    4248 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1212 21:32:37.507342    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 21:32:37.523982    4248 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 21:32:37.528069    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 21:32:37.548632    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:32:37.567777    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 21:32:37.588723    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:32:37.608062    4248 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:32:37.629201    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 21:32:37.649560    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 21:32:37.667527    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 21:32:37.690529    4248 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:32:37.710687    4248 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:32:37.729958    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:37.888501    4248 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 21:32:38.016543    4248 start.go:496] detecting cgroup driver to use...
	I1212 21:32:38.016543    4248 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:32:38.021759    4248 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 21:32:38.045532    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:32:38.071354    4248 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:32:38.150544    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:32:38.173582    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 21:32:38.193077    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:32:38.218979    4248 ssh_runner.go:195] Run: which cri-dockerd
	I1212 21:32:38.231122    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 21:32:38.246140    4248 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1212 21:32:38.271127    4248 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 21:32:38.414548    4248 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 21:32:38.549270    4248 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 21:32:38.549270    4248 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 21:32:38.574682    4248 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1212 21:32:38.596980    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:38.746077    4248 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 21:32:39.571827    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:32:39.594550    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1212 21:32:39.618807    4248 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1212 21:32:39.645739    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:32:39.668299    4248 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 21:32:39.807124    4248 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 21:32:39.946119    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:40.086356    4248 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 21:32:40.115320    4248 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1212 21:32:40.136980    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:40.275781    4248 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1212 21:32:40.384906    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:32:40.403362    4248 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 21:32:40.407665    4248 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 21:32:40.417070    4248 start.go:564] Will wait 60s for crictl version
	I1212 21:32:40.421802    4248 ssh_runner.go:195] Run: which crictl
	I1212 21:32:40.435075    4248 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:32:40.477353    4248 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1212 21:32:40.481633    4248 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:32:40.526598    4248 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:32:40.566170    4248 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1212 21:32:40.570307    4248 cli_runner.go:164] Run: docker exec -t newest-cni-449900 dig +short host.docker.internal
	I1212 21:32:40.704237    4248 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1212 21:32:40.709308    4248 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1212 21:32:40.719244    4248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:32:40.739290    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:40.797774    4248 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 21:32:40.799861    4248 kubeadm.go:884] updating cluster {Name:newest-cni-449900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 21:32:40.800240    4248 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 21:32:40.804298    4248 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:32:40.837317    4248 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 21:32:40.837317    4248 docker.go:621] Images already preloaded, skipping extraction
	I1212 21:32:40.841250    4248 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:32:40.875188    4248 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 21:32:40.875188    4248 cache_images.go:86] Images are preloaded, skipping loading
	I1212 21:32:40.875188    4248 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 docker true true} ...
	I1212 21:32:40.875753    4248 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-449900 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:32:40.879753    4248 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 21:32:40.954718    4248 cni.go:84] Creating CNI manager for ""
	I1212 21:32:40.954718    4248 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:32:40.954718    4248 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1212 21:32:40.954718    4248 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-449900 NodeName:newest-cni-449900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:32:40.954718    4248 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-449900"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:32:40.959727    4248 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 21:32:40.972418    4248 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:32:40.977111    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:32:40.988704    4248 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1212 21:32:41.011653    4248 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 21:32:41.032100    4248 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1212 21:32:41.059217    4248 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1212 21:32:41.066089    4248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:32:41.085707    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:41.226868    4248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:32:41.248749    4248 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900 for IP: 192.168.85.2
	I1212 21:32:41.248749    4248 certs.go:195] generating shared ca certs ...
	I1212 21:32:41.248749    4248 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:32:41.249389    4248 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1212 21:32:41.249651    4248 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1212 21:32:41.249726    4248 certs.go:257] generating profile certs ...
	I1212 21:32:41.250263    4248 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\client.key
	I1212 21:32:41.250513    4248 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.key.67e5e88d
	I1212 21:32:41.250722    4248 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\proxy-client.key
	I1212 21:32:41.251524    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem (1338 bytes)
	W1212 21:32:41.251741    4248 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396_empty.pem, impossibly tiny 0 bytes
	I1212 21:32:41.251826    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1212 21:32:41.252063    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 21:32:41.252220    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 21:32:41.252398    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1212 21:32:41.252710    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem (1708 bytes)
	I1212 21:32:41.254134    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:32:41.288378    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 21:32:41.319186    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:32:41.346629    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:32:41.379412    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 21:32:41.407785    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 21:32:41.434595    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:32:41.465218    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:32:41.491104    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem --> /usr/share/ca-certificates/13396.pem (1338 bytes)
	I1212 21:32:41.526955    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /usr/share/ca-certificates/133962.pem (1708 bytes)
	I1212 21:32:41.554086    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:32:41.585032    4248 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:32:41.609643    4248 ssh_runner.go:195] Run: openssl version
	I1212 21:32:41.624413    4248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/133962.pem
	I1212 21:32:41.643262    4248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/133962.pem /etc/ssl/certs/133962.pem
	I1212 21:32:41.661513    4248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133962.pem
	I1212 21:32:41.670034    4248 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 21:32:41.674026    4248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133962.pem
	I1212 21:32:41.722721    4248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:32:41.739428    4248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:32:41.758028    4248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:32:41.775766    4248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:32:41.783842    4248 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:32:41.788493    4248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:32:41.834978    4248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:32:41.852716    4248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13396.pem
	I1212 21:32:41.872502    4248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13396.pem /etc/ssl/certs/13396.pem
	I1212 21:32:41.892268    4248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13396.pem
	I1212 21:32:41.902126    4248 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 21:32:41.906908    4248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13396.pem
	I1212 21:32:41.957407    4248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:32:41.975429    4248 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:32:41.988570    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:32:42.036143    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:32:42.085875    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:32:42.134210    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:32:42.182920    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:32:42.231061    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 21:32:42.275151    4248 kubeadm.go:401] StartCluster: {Name:newest-cni-449900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:32:42.279857    4248 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 21:32:42.316959    4248 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:32:42.330116    4248 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 21:32:42.330159    4248 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 21:32:42.334136    4248 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:32:42.349113    4248 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:32:42.353059    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.410308    4248 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-449900" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:32:42.411003    4248 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-449900" cluster setting kubeconfig missing "newest-cni-449900" context setting]
	I1212 21:32:42.411047    4248 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:32:42.433468    4248 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:32:42.450763    4248 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1212 21:32:42.450851    4248 kubeadm.go:602] duration metric: took 120.6907ms to restartPrimaryControlPlane
	I1212 21:32:42.450851    4248 kubeadm.go:403] duration metric: took 175.6977ms to StartCluster
	I1212 21:32:42.450887    4248 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:32:42.451069    4248 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:32:42.452318    4248 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:32:42.452708    4248 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 21:32:42.452708    4248 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 21:32:42.453236    4248 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-449900"
	I1212 21:32:42.453345    4248 addons.go:70] Setting dashboard=true in profile "newest-cni-449900"
	I1212 21:32:42.453345    4248 addons.go:239] Setting addon dashboard=true in "newest-cni-449900"
	W1212 21:32:42.453442    4248 addons.go:248] addon dashboard should already be in state true
	I1212 21:32:42.453345    4248 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-449900"
	I1212 21:32:42.453442    4248 config.go:182] Loaded profile config "newest-cni-449900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:32:42.453345    4248 addons.go:70] Setting default-storageclass=true in profile "newest-cni-449900"
	I1212 21:32:42.453442    4248 host.go:66] Checking if "newest-cni-449900" exists ...
	I1212 21:32:42.453548    4248 host.go:66] Checking if "newest-cni-449900" exists ...
	I1212 21:32:42.453548    4248 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-449900"
	I1212 21:32:42.456924    4248 out.go:179] * Verifying Kubernetes components...
	I1212 21:32:42.462734    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:42.462734    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:42.462734    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:42.464017    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:42.521801    4248 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1212 21:32:42.522505    4248 addons.go:239] Setting addon default-storageclass=true in "newest-cni-449900"
	I1212 21:32:42.522505    4248 host.go:66] Checking if "newest-cni-449900" exists ...
	I1212 21:32:42.524049    4248 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:32:42.525755    4248 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1212 21:32:42.527435    4248 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:42.527435    4248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:32:42.529307    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 21:32:42.529307    4248 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 21:32:42.532224    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.534064    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.535884    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:42.589450    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:42.589450    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:42.590457    4248 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:32:42.590457    4248 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:32:42.593457    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.639453    4248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:32:42.655360    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:42.731687    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 21:32:42.731721    4248 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 21:32:42.735286    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:42.750859    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 21:32:42.750859    4248 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 21:32:42.774055    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.777032    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 21:32:42.777032    4248 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 21:32:42.833790    4248 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:32:42.834403    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 21:32:42.834403    4248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 21:32:42.837834    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:32:42.838786    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:42.862404    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 21:32:42.862439    4248 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1212 21:32:42.938528    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:42.938593    4248 retry.go:31] will retry after 285.852869ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:42.946574    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 21:32:42.946574    4248 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 21:32:42.970519    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 21:32:42.970570    4248 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 21:32:43.044060    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 21:32:43.044113    4248 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W1212 21:32:43.058105    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.058105    4248 retry.go:31] will retry after 367.133117ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.065874    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 21:32:43.065934    4248 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 21:32:43.090839    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:43.170845    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.170845    4248 retry.go:31] will retry after 360.542613ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.229247    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:43.312297    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.312297    4248 retry.go:31] will retry after 217.305042ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.338331    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:43.430298    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:43.514114    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.514114    4248 retry.go:31] will retry after 194.385848ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.535363    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:43.536351    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:43.629787    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.629787    4248 retry.go:31] will retry after 687.212662ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:32:43.630799    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.630799    4248 retry.go:31] will retry after 521.23237ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.714316    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:43.819832    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.819832    4248 retry.go:31] will retry after 810.162007ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.838681    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:44.158533    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:44.239462    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.239462    4248 retry.go:31] will retry after 722.851273ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.322823    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:44.338752    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:32:44.412482    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.412540    4248 retry.go:31] will retry after 687.458608ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.636450    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:44.715327    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.715327    4248 retry.go:31] will retry after 881.564419ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.839869    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:44.968341    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:45.056052    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.056052    4248 retry.go:31] will retry after 1.083270835s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.107611    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:45.652119   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:32:45.188500    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.188500    4248 retry.go:31] will retry after 1.051455266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.338492    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:45.601770    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:45.692383    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.692901    4248 retry.go:31] will retry after 847.636525ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.840006    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:46.145354    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:46.237762    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.237762    4248 retry.go:31] will retry after 1.841338358s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.245135    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:46.317745    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.317805    4248 retry.go:31] will retry after 1.659422291s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.338989    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:46.545951    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:46.622899    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.622899    4248 retry.go:31] will retry after 2.146117093s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.840053    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:47.339185    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:47.839795    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:47.982731    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:48.067392    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.067392    4248 retry.go:31] will retry after 2.380093596s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.084148    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:48.170522    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.171061    4248 retry.go:31] will retry after 1.169420442s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.339141    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:48.775985    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:32:48.839214    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:32:48.853292    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.853292    4248 retry.go:31] will retry after 1.773821104s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:49.339743    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:49.345080    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:49.426473    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:49.426473    4248 retry.go:31] will retry after 2.584062662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:49.838663    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:50.339208    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:50.452188    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:50.553300    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:50.553376    4248 retry.go:31] will retry after 3.748834475s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:50.633099    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:50.715582    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:50.715582    4248 retry.go:31] will retry after 2.533349108s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:50.839720    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:51.339390    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:51.839102    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:52.016916    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:52.095547    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:52.095725    4248 retry.go:31] will retry after 3.902877695s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:52.339080    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:52.839789    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:53.254021    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:53.339386    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:53.339438    4248 retry.go:31] will retry after 8.052230376s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:53.341078    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:53.838612    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:54.306967    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:54.338622    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:32:54.396253    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:54.396253    4248 retry.go:31] will retry after 4.728755572s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:54.840384    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:55.339060    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:32:55.691200   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:32:55.838985    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:56.003958    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:56.086306    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:56.086306    4248 retry.go:31] will retry after 4.27813674s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:56.339722    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:56.838152    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:57.339158    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:57.840925    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:58.339529    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:58.839108    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:59.131643    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:59.224557    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:59.224611    4248 retry.go:31] will retry after 6.97751667s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:59.340004    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:59.839304    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:00.339076    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:00.368989    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:33:00.453078    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:00.453078    4248 retry.go:31] will retry after 11.55737722s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:00.839680    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:01.337369    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:01.396666    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:33:01.481506    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:01.481506    4248 retry.go:31] will retry after 9.469717232s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:01.840205    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:02.340807    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:02.839775    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:03.338747    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:03.839967    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:04.338805    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:04.839654    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:05.730439   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:05.340177    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:05.839010    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:06.207461    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:33:06.310159    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:06.310159    4248 retry.go:31] will retry after 14.485985358s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:06.338617    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:06.839760    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:07.339574    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:07.839761    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:08.339168    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:08.839559    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:09.340617    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:09.841017    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:10.339655    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:10.840253    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:10.956764    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:33:11.041153    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:11.041153    4248 retry.go:31] will retry after 7.720343102s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:11.339467    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:11.838030    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:12.015476    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:33:12.097506    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:12.098032    4248 retry.go:31] will retry after 13.738254929s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:12.340239    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:12.840739    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:13.338865    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:13.841423    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:14.338850    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:14.839426    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:15.769327   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:15.340112    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:15.839885    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:16.340084    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:16.839133    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:17.340118    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:17.838995    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:18.340239    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:18.768160    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:33:18.839718    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:18.858996    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:18.859079    4248 retry.go:31] will retry after 29.319727103s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:19.339526    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:19.839033    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:20.342579    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:20.799893    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:33:20.838333    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:20.898204    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:20.898204    4248 retry.go:31] will retry after 24.787260988s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:21.340136    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:21.839432    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:22.339934    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:22.838948    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:23.339680    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:23.839713    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:24.339736    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:24.841279    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:25.803331   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:25.340290    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:25.840116    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:25.841834    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:33:25.931872    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:25.931872    4248 retry.go:31] will retry after 19.805631473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:26.340454    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:26.840685    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:27.340143    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:27.839689    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:28.341476    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:28.840343    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:29.340212    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:29.839319    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:30.338669    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:30.838810    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:31.338837    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:31.840375    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:32.339813    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:32.839324    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:33.339926    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:33.839761    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:34.341257    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:34.840212    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:35.842883   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:35.339489    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:35.839856    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:36.337998    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:36.840979    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:37.339343    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:37.839384    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:38.340134    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:38.840040    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:39.339613    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:39.841297    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:40.339971    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:40.839521    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:41.340107    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:41.841992    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:42.341019    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:42.840372    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:42.879262    4248 logs.go:282] 0 containers: []
	W1212 21:33:42.879262    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:42.883398    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:42.914084    4248 logs.go:282] 0 containers: []
	W1212 21:33:42.914084    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:42.917664    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:42.949277    4248 logs.go:282] 0 containers: []
	W1212 21:33:42.949344    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:42.952925    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:42.982130    4248 logs.go:282] 0 containers: []
	W1212 21:33:42.982130    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:42.985453    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:43.018015    4248 logs.go:282] 0 containers: []
	W1212 21:33:43.018015    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:43.021515    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:43.051693    4248 logs.go:282] 0 containers: []
	W1212 21:33:43.051693    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:43.055531    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:43.086156    4248 logs.go:282] 0 containers: []
	W1212 21:33:43.086156    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:43.089864    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:43.124751    4248 logs.go:282] 0 containers: []
	W1212 21:33:43.124784    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:43.124813    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:43.124844    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:43.199819    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:43.199905    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:43.266773    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:43.266773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:43.306805    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:43.306805    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:43.398810    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:43.387138    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.388868    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.391036    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.392819    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.394391    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:43.387138    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.388868    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.391036    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.392819    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.394391    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:43.398906    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:43.398934    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1212 21:33:45.876156   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:45.691215    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:33:45.743028    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:33:45.776796    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:45.776826    4248 retry.go:31] will retry after 41.02577954s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:33:45.824108    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:45.824108    4248 retry.go:31] will retry after 22.15645807s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:45.933456    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:45.956650    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:45.987658    4248 logs.go:282] 0 containers: []
	W1212 21:33:45.987702    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:45.991775    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:46.021440    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.021486    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:46.025175    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:46.052749    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.052749    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:46.057405    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:46.089535    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.089535    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:46.093406    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:46.133904    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.133904    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:46.137545    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:46.167364    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.167364    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:46.172118    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:46.199303    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.199335    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:46.203217    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:46.233482    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.233482    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:46.233482    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:46.233482    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:46.309757    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:46.300789    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.302135    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.303841    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.305512    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.306441    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:46.300789    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.302135    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.303841    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.305512    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.306441    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:46.309757    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:46.309757    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:46.342247    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:46.342247    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:46.392802    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:46.392802    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:46.456332    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:46.456332    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:48.184906    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:33:48.272776    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:48.272776    4248 retry.go:31] will retry after 31.924309129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:49.001621    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:49.031284    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:49.062413    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.062413    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:49.067359    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:49.099157    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.099157    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:49.103086    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:49.145777    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.145841    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:49.148934    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:49.177432    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.177432    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:49.180892    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:49.208677    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.208753    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:49.213124    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:49.242190    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.242273    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:49.246212    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:49.271793    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.271793    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:49.275860    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:49.301204    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.301304    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:49.301304    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:49.301304    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:49.387773    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:49.378267    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.379554    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.380906    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.382182    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.383498    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:49.378267    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.379554    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.380906    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.382182    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.383498    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:49.387773    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:49.387773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:49.417403    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:49.417403    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:49.468610    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:49.468669    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:49.530801    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:49.530801    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:52.075422    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:52.099836    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:52.132317    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.132317    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:52.136505    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:52.168253    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.168329    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:52.172204    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:52.201698    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.201728    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:52.204999    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:52.232400    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.232400    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:52.235747    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:52.264373    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.264373    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:52.268631    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:52.296360    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.296360    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:52.301499    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:52.328506    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.328506    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:52.332618    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:52.364046    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.364046    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:52.364046    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:52.364046    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:52.429893    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:52.429893    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:52.469809    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:52.469809    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:52.561531    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:52.552048    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.553328    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.554454    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.555743    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.556766    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:52.552048    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.553328    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.554454    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.555743    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.556766    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:52.561531    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:52.561531    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:52.589557    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:52.589610    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:33:55.915985   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:55.143558    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:55.169650    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:55.199312    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.199312    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:55.203019    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:55.231107    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.231107    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:55.234903    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:55.262662    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.262662    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:55.267031    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:55.298297    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.298297    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:55.302781    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:55.329536    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.329536    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:55.333805    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:55.361920    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.361920    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:55.366064    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:55.390952    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.390952    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:55.395295    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:55.420565    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.420565    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:55.420565    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:55.420565    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:55.485763    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:55.485763    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:55.523584    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:55.524585    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:55.609239    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:55.599132    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.600050    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.601408    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.603109    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.604199    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:55.599132    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.600050    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.601408    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.603109    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.604199    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:55.609239    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:55.609239    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:55.635439    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:55.635439    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:58.192127    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:58.215045    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:58.245598    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.245598    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:58.249366    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:58.277778    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.277778    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:58.282003    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:58.308406    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.308406    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:58.312530    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:58.343615    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.343615    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:58.347469    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:58.374271    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.374323    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:58.378940    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:58.408013    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.408013    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:58.412815    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:58.442217    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.442217    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:58.446143    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:58.473696    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.473696    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:58.473696    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:58.473696    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:58.512536    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:58.512536    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:58.598847    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:58.588486    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.590430    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.592244    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.593816    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.594663    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:58.588486    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.590430    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.592244    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.593816    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.594663    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:58.598847    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:58.598847    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:58.631972    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:58.631972    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:58.686549    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:58.686549    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:01.254401    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:01.280957    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:01.311649    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.311649    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:01.317426    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:01.347111    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.347111    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:01.351587    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:01.384140    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.384140    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:01.388189    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:01.414950    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.415035    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:01.419033    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:01.451573    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.451573    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:01.458437    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:01.486676    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.486676    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:01.490132    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:01.519922    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.519945    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:01.523525    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:01.551133    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.551133    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:01.551133    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:01.551226    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:01.643156    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:01.629552    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.630752    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.631976    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.634745    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.636369    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:01.629552    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.630752    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.631976    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.634745    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.636369    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:01.643201    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:01.643201    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:01.670680    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:01.670680    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:01.719121    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:01.719121    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:01.778945    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:01.778945    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:04.321335    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:04.346182    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:04.379694    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.379694    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:04.383453    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:04.413770    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.413770    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:04.417438    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:04.449689    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.449689    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:04.453945    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:04.484307    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.484336    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:04.488052    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:04.520437    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.520529    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:04.523808    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:04.554411    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.554486    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:04.558132    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:04.590943    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.590991    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:04.596733    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:04.626155    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.626155    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:04.626155    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:04.626155    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:04.690704    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:04.690704    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:04.737151    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:04.737151    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:04.836298    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:04.823870    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.824851    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.826975    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.828203    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.829992    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:04.823870    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.824851    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.826975    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.828203    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.829992    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:04.836298    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:04.836298    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:04.866456    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:04.866456    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:34:05.955333   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:07.431984    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:07.455983    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:07.487054    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.487054    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:07.491008    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:07.518559    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.518559    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:07.523241    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:07.555141    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.555206    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:07.559053    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:07.586080    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.586129    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:07.590415    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:07.617729    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.617802    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:07.621528    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:07.648847    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.648847    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:07.653016    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:07.679589    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.679589    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:07.682589    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:07.710509    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.710549    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:07.710584    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:07.710584    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:07.740602    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:07.740602    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:07.795596    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:07.795596    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:07.856481    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:07.856481    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:07.896541    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:07.896541    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:34:07.984068    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:34:07.988076    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:07.977659    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.978729    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.979455    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.981723    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.982573    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:07.977659    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.978729    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.979455    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.981723    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.982573    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1212 21:34:08.063575    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:34:08.063575    4248 retry.go:31] will retry after 24.947157304s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:34:10.493129    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:10.518032    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:10.551592    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.551592    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:10.555349    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:10.583478    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.583478    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:10.587337    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:10.614968    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.614968    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:10.618410    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:10.649067    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.649067    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:10.651995    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:10.683408    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.683408    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:10.687055    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:10.719380    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.719380    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:10.722379    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:10.750539    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.750539    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:10.753888    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:10.783559    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.783559    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:10.783559    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:10.783559    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:10.847683    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:10.847683    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:10.886290    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:10.886290    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:10.977825    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:10.965780    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.967047    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.970464    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.971759    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.973064    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:10.965780    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.967047    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.970464    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.971759    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.973064    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:10.977825    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:10.977825    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:11.007383    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:11.007383    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:13.563252    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:13.588423    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:13.621565    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.621565    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:13.625750    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:13.654777    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.654777    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:13.658374    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:13.688618    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.688672    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:13.692472    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:13.719610    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.719610    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:13.723237    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:13.752648    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.752648    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:13.756634    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:13.783829    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.783897    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:13.788161    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:13.819558    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.819558    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:13.823971    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:13.852579    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.852579    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:13.852650    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:13.852650    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:13.917806    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:13.917806    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:13.957291    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:13.957291    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:14.045151    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:14.033435    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.035385    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.037555    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.038496    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.039928    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:14.033435    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.035385    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.037555    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.038496    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.039928    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:14.045151    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:14.045151    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:14.072113    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:14.072113    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:34:15.995826   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:16.634542    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:16.664271    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:16.693836    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.693836    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:16.697626    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:16.724961    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.724961    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:16.729680    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:16.759174    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.759174    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:16.764416    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:16.793992    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.793992    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:16.801287    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:16.833032    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.833032    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:16.837930    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:16.867028    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.867109    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:16.871232    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:16.901023    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.901023    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:16.904729    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:16.932215    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.932215    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:16.932215    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:16.932215    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:17.000205    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:17.000205    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:17.040242    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:17.040242    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:17.128411    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:17.118697    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.119770    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.121001    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.121935    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.122994    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:17.118697    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.119770    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.121001    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.121935    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.122994    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:17.128411    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:17.128411    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:17.154693    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:17.154693    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:19.712060    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:19.740196    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:19.769381    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.769381    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:19.774003    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:19.805171    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.805171    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:19.809714    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:19.839073    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.839073    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:19.846215    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:19.874403    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.874403    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:19.878637    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:19.912913    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.912913    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:19.916626    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:19.944658    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.944710    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:19.948241    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:19.979179    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.979241    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:19.982846    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:20.011754    4248 logs.go:282] 0 containers: []
	W1212 21:34:20.011812    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:20.011864    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:20.011913    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:20.052176    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:20.052176    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:20.143325    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:20.132505    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.133469    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.134667    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.135927    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.139001    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:20.132505    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.133469    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.134667    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.135927    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.139001    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:20.143363    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:20.143410    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:20.172292    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:20.172292    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:20.202316    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:34:20.222997    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:20.222997    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1212 21:34:20.293327    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:34:20.293327    4248 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 21:34:22.836246    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:22.860619    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:22.894925    4248 logs.go:282] 0 containers: []
	W1212 21:34:22.894925    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:22.898706    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:22.928391    4248 logs.go:282] 0 containers: []
	W1212 21:34:22.928391    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:22.931509    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:22.962526    4248 logs.go:282] 0 containers: []
	W1212 21:34:22.962526    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:22.966341    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:22.998739    4248 logs.go:282] 0 containers: []
	W1212 21:34:22.998810    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:23.002260    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:23.038485    4248 logs.go:282] 0 containers: []
	W1212 21:34:23.038485    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:23.042357    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:23.069862    4248 logs.go:282] 0 containers: []
	W1212 21:34:23.069862    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:23.073843    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:23.102066    4248 logs.go:282] 0 containers: []
	W1212 21:34:23.102066    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:23.105940    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:23.134159    4248 logs.go:282] 0 containers: []
	W1212 21:34:23.134159    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:23.134159    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:23.134159    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:23.200245    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:23.200245    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:23.245899    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:23.245899    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:23.334171    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:23.323723    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.324634    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.327041    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.328182    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.329038    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:23.323723    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.324634    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.327041    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.328182    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.329038    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:23.334171    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:23.334171    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:23.363271    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:23.363890    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:34:26.035692   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:25.919925    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:25.943736    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:25.976745    4248 logs.go:282] 0 containers: []
	W1212 21:34:25.976745    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:25.982399    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:26.011034    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.011111    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:26.015074    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:26.041930    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.041960    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:26.046384    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:26.079224    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.079224    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:26.083294    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:26.114533    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.114622    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:26.118567    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:26.144766    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.144766    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:26.148635    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:26.178718    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.178773    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:26.182397    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:26.209360    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.209428    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:26.209458    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:26.209458    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:26.246526    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:26.246526    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:26.333000    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:26.322480    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.324197    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.326202    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.327840    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.328729    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:26.322480    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.324197    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.326202    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.327840    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.328729    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:26.333074    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:26.333074    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:26.364508    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:26.364508    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:26.415398    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:26.415922    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:26.808632    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:34:26.887159    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:34:26.887159    4248 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 21:34:28.982561    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:29.008658    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:29.038321    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.038321    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:29.043365    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:29.074505    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.074505    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:29.079133    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:29.107625    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.107625    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:29.111459    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:29.141746    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.141771    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:29.145351    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:29.173757    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.173757    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:29.177451    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:29.207313    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.207375    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:29.211355    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:29.238896    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.238896    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:29.242592    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:29.271887    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.271955    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:29.271992    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:29.271992    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:29.302135    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:29.302135    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:29.354758    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:29.354787    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:29.415567    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:29.416566    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:29.460567    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:29.460567    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:29.568507    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:29.558828    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.559760    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.561215    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.564376    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.566192    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:29.558828    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.559760    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.561215    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.564376    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.566192    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:32.072619    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:32.098196    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:32.129542    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.129605    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:32.133207    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:32.165871    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.165871    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:32.169728    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:32.197622    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.197622    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:32.201406    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:32.229774    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.229866    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:32.233559    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:32.261871    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.261922    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:32.265550    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:32.292765    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.292838    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:32.297859    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:32.324309    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.324309    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:32.330216    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:32.361218    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.361300    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:32.361300    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:32.361300    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:32.397467    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:32.397467    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:32.486261    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:32.475376    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.476624    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.477661    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.478789    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.479928    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:32.475376    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.476624    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.477661    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.478789    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.479928    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:32.486261    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:32.486261    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:32.514032    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:32.514583    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:32.560591    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:32.560639    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:33.016195    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:34:33.096004    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:34:33.096004    4248 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 21:34:33.099548    4248 out.go:179] * Enabled addons: 
	I1212 21:34:33.102664    4248 addons.go:530] duration metric: took 1m50.6481558s for enable addons: enabled=[]
	W1212 21:34:36.071150   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:35.127374    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:35.151249    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:35.181629    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.181629    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:35.185603    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:35.214096    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.214151    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:35.218239    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:35.247180    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.247201    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:35.253408    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:35.284673    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.284673    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:35.288386    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:35.317675    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.317675    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:35.320949    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:35.350108    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.350178    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:35.353675    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:35.385201    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.385201    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:35.388633    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:35.416281    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.416281    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:35.416281    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:35.416281    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:35.444773    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:35.444773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:35.494185    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:35.494185    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:35.554851    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:35.554851    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:35.593466    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:35.593466    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:35.675497    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:35.665255    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.666291    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.667389    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.668747    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.670048    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:35.665255    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.666291    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.667389    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.668747    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.670048    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:38.181493    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:38.207315    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:38.240970    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.240970    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:38.244930    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:38.270853    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.270853    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:38.274626    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:38.302607    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.302607    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:38.306311    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:38.332974    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.332998    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:38.336938    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:38.366523    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.366523    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:38.370763    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:38.400788    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.400855    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:38.404730    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:38.432105    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.432140    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:38.435718    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:38.468252    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.468252    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:38.468252    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:38.468252    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:38.498010    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:38.498010    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:38.549065    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:38.549065    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:38.610282    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:38.610282    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:38.649865    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:38.649865    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:38.742735    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:38.729670    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.730594    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.734869    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.735575    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.737749    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:38.729670    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.730594    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.734869    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.735575    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.737749    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:41.248859    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:41.273451    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:41.307576    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.307576    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:41.311206    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:41.341191    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.341191    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:41.344873    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:41.373089    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.373089    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:41.377064    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:41.407927    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.407927    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:41.411904    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:41.438747    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.438747    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:41.442684    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:41.471705    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.471705    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:41.475643    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:41.502964    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.503009    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:41.506219    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:41.537468    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.537468    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:41.537468    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:41.537468    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:41.601385    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:41.601385    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:41.640441    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:41.640441    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:41.726762    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:41.716055    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.717335    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.718444    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.719692    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.720641    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:41.716055    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.717335    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.718444    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.719692    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.720641    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:41.727284    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:41.727418    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:41.753195    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:41.753250    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:44.308085    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:44.334644    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:44.365798    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.365798    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:44.369363    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:44.401410    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.401463    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:44.405291    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:44.434343    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.434343    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:44.438273    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:44.468474    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.468525    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:44.474306    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:44.500642    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.500642    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:44.504057    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:44.533188    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.533188    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:44.538912    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:44.570110    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.570156    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:44.573802    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:44.628237    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.628237    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:44.628313    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:44.628313    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:44.695236    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:44.695236    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:44.756020    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:44.756020    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:44.797607    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:44.797607    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:44.885974    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:44.873979    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.875233    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.876512    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.878696    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.880447    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:44.873979    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.875233    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.876512    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.878696    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.880447    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:44.885974    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:44.885974    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1212 21:34:46.110883   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:47.418476    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:47.448163    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:47.485273    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.485367    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:47.489088    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:47.519610    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.519610    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:47.523527    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:47.556797    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.556797    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:47.561198    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:47.592455    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.592486    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:47.597545    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:47.642336    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.642336    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:47.646873    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:47.674652    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.674652    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:47.678167    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:47.711489    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.711583    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:47.715137    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:47.743744    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.743744    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:47.743744    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:47.743744    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:47.772500    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:47.772500    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:47.821703    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:47.821703    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:47.885067    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:47.886068    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:47.927691    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:47.927691    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:48.009816    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:47.997942    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:47.998780    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.001728    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.003155    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.003997    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:47.997942    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:47.998780    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.001728    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.003155    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.003997    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:50.515712    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:50.539433    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:50.569952    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.570011    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:50.573934    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:50.602042    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.602042    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:50.606815    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:50.637314    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.637314    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:50.641189    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:50.671374    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.671448    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:50.675158    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:50.705121    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.705121    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:50.708839    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:50.736349    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.736349    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:50.740642    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:50.766780    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.766780    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:50.771844    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:50.799830    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.799830    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:50.799830    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:50.799935    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:50.864221    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:50.864221    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:50.902900    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:50.902900    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:50.987201    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:50.977230    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.978460    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.979921    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.981707    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.982796    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:50.977230    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.978460    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.979921    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.981707    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.982796    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:50.987245    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:50.987307    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:51.013974    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:51.013974    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:53.565470    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:53.591015    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:53.622676    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.622700    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:53.626721    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:53.656673    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.656708    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:53.660173    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:53.690897    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.690897    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:53.695672    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:53.724746    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.724746    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:53.729650    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:53.755860    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.755860    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:53.761786    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:53.788721    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.788721    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:53.792819    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:53.824882    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.824923    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:53.827878    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:53.861835    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.861835    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:53.861835    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:53.861922    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:53.923537    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:53.923537    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:53.963431    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:53.963431    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:54.047319    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:54.037350    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.038501    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.039589    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.040496    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.043315    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:54.037350    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.038501    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.039589    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.040496    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.043315    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:54.047386    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:54.047386    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:54.072633    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:54.072633    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:34:56.149055   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:56.625180    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:56.650655    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:56.685280    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.685318    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:56.689156    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:56.714493    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.714493    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:56.718695    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:56.746923    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.746990    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:56.750886    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:56.778419    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.778419    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:56.783916    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:56.811946    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.811946    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:56.815910    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:56.846245    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.846245    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:56.849750    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:56.881552    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.881612    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:56.886159    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:56.914494    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.914494    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:56.914494    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:56.914494    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:56.978107    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:56.978107    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:57.017141    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:57.017141    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:57.105278    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:57.095758    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.096802    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.099867    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.100954    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.102366    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:57.095758    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.096802    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.099867    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.100954    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.102366    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:57.105278    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:57.105278    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:57.136106    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:57.136106    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:59.698008    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:59.721859    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:59.752926    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.752926    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:59.758293    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:59.787817    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.787817    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:59.792012    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:59.820724    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.820724    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:59.824383    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:59.853943    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.853943    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:59.856939    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:59.884234    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.884234    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:59.887359    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:59.917769    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.917769    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:59.920766    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:59.947735    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.947735    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:59.950845    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:59.982686    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.982686    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:59.982686    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:59.982686    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:00.047428    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:00.047428    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:00.087722    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:00.087722    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:00.173037    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:00.162970    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.163861    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.166472    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.167086    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.169872    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:00.162970    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.163861    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.166472    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.167086    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.169872    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:00.173037    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:00.173124    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:00.200722    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:00.200722    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:02.758771    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:02.785740    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:02.818761    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.818761    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:02.822630    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:02.856985    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.857041    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:02.860042    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:02.891635    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.891635    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:02.896827    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:02.927006    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.927006    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:02.930675    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:02.961911    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.961911    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:02.966203    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:02.995618    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.995618    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:03.000489    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:03.029638    4248 logs.go:282] 0 containers: []
	W1212 21:35:03.029720    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:03.033579    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:03.062226    4248 logs.go:282] 0 containers: []
	W1212 21:35:03.062226    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:03.062226    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:03.062226    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:03.123924    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:03.123924    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:03.164267    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:03.164267    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:03.278702    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:03.266561    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.267580    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.268479    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.270653    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.271579    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:35:03.278702    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:03.278702    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:03.310678    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:03.310678    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:35:06.187241   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:35:05.870522    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:05.895548    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:05.931745    4248 logs.go:282] 0 containers: []
	W1212 21:35:05.931745    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:05.935905    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:05.967105    4248 logs.go:282] 0 containers: []
	W1212 21:35:05.967167    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:05.970526    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:05.999378    4248 logs.go:282] 0 containers: []
	W1212 21:35:05.999501    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:06.005272    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:06.033559    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.033559    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:06.037432    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:06.067367    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.067423    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:06.071216    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:06.098778    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.098778    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:06.102725    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:06.129330    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.129373    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:06.133426    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:06.161909    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.161982    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:06.161982    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:06.161982    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:06.227303    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:06.227303    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:06.268038    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:06.268038    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:06.361371    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:06.349507    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.350379    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.353273    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.354715    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.355937    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:35:06.361371    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:06.361371    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:06.387773    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:06.387773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:08.952500    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:08.976516    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:09.008567    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.008567    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:09.012505    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:09.041661    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.041661    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:09.045897    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:09.072715    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.072715    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:09.076471    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:09.104975    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.104975    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:09.110239    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:09.137529    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.137529    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:09.142874    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:09.172751    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.172856    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:09.176271    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:09.208124    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.208124    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:09.211966    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:09.240860    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.240922    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:09.240922    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:09.240922    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:09.277501    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:09.277501    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:09.365788    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:09.355192    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.356293    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.357382    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.358419    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.359695    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:35:09.365788    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:09.365788    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:09.393564    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:09.393564    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:09.446625    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:09.446625    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:12.014046    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:12.039061    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:12.073012    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.073012    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:12.076517    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:12.106078    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.106078    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:12.110106    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:12.148792    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.148792    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:12.153359    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:12.180485    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.180485    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:12.184489    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:12.216517    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.216517    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:12.219938    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:12.248201    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.248280    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:12.251955    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:12.279303    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.279303    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:12.283688    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:12.311155    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.311155    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:12.311240    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:12.311240    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:12.363330    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:12.363330    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:12.424272    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:12.424272    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:12.465092    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:12.465092    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:12.553464    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:12.543149    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.544047    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.546957    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.548802    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.550266    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:35:12.553464    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:12.553464    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:15.089907    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:15.115463    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	W1212 21:35:16.227474   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:35:15.148783    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.148874    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:15.153957    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:15.184011    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.184098    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:15.189531    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:15.217178    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.217178    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:15.220770    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:15.250751    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.250751    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:15.254712    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:15.285217    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.285217    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:15.289685    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:15.318549    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.318549    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:15.322745    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:15.350449    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.350449    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:15.354843    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:15.387373    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.387373    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:15.387373    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:15.387448    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:15.447923    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:15.447923    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:15.486013    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:15.486013    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:15.578245    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:15.568067    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.569055    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.570286    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.571140    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.573311    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:35:15.578325    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:15.578352    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:15.605626    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:15.605626    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:18.166679    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:18.192323    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:18.225358    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.225358    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:18.228980    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:18.257552    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.257552    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:18.261101    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:18.289460    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.289460    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:18.294831    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:18.323718    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.323799    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:18.327263    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:18.355946    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.356051    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:18.360915    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:18.389130    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.389212    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:18.392968    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:18.421891    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.421979    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:18.425694    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:18.454158    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.454158    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:18.454158    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:18.454158    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:18.537920    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:18.527608    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.528385    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.531174    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.532949    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.533980    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:35:18.537920    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:18.537920    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:18.567575    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:18.567575    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:18.620910    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:18.620945    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:18.683030    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:18.683030    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:21.227570    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:21.256918    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:21.288783    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.288783    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:21.292853    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:21.323426    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.323426    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:21.326841    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:21.358519    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.358519    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:21.364432    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:21.396744    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.396829    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:21.400322    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:21.428939    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.428939    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:21.432771    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:21.462453    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.462453    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:21.466020    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:21.494057    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.494106    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:21.497434    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:21.528610    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.528649    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:21.528649    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:21.528649    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:21.565220    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:21.565220    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:21.656317    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:21.646689    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.647621    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.649439    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.651349    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.653188    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:21.646689    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.647621    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.649439    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.651349    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.653188    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:21.656317    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:21.656317    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:21.685835    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:21.685835    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:21.735864    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:21.735864    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:24.301555    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:24.325946    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:24.356277    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.356277    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:24.360286    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:24.388916    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.388916    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:24.392624    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:24.420665    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.420696    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:24.424318    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:24.453296    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.453296    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:24.456761    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:24.485923    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.485923    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:24.489706    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:24.521430    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.521430    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:24.525001    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:24.553182    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.553182    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:24.557651    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:24.585999    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.585999    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:24.585999    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:24.585999    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:24.649025    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:24.649025    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:24.687813    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:24.687813    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:24.771442    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:24.762805    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.764135    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.765433    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.766914    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.768359    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:24.762805    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.764135    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.765433    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.766914    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.768359    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:24.771442    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:24.771442    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:24.798209    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:24.798236    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:35:26.266522   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:35:27.358394    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:27.382187    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:27.414963    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.414963    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:27.418575    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:27.446874    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.446874    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:27.450946    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:27.478168    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.478206    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:27.481426    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:27.510494    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.510494    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:27.514962    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:27.544571    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.544571    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:27.548425    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:27.577760    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.577760    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:27.583647    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:27.611248    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.611321    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:27.614827    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:27.643657    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.643657    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:27.643657    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:27.643657    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:27.727590    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:27.720083    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.721080    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.722381    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.723519    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.724748    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:27.720083    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.721080    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.722381    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.723519    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.724748    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:27.727590    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:27.727590    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:27.758480    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:27.758480    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:27.807919    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:27.807919    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:27.868159    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:27.868159    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:30.414374    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:30.439656    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:30.469888    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.469958    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:30.473898    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:30.506297    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.506297    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:30.510214    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:30.545930    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.545982    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:30.549910    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:30.576713    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.576713    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:30.581000    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:30.611561    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.611561    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:30.615085    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:30.643517    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.643600    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:30.647297    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:30.677589    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.677589    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:30.681477    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:30.712989    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.713050    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:30.713050    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:30.713050    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:30.800951    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:30.787443    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.789072    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.790773    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.791718    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.795269    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:30.787443    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.789072    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.790773    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.791718    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.795269    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:30.800985    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:30.801031    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:30.827165    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:30.827165    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:30.877219    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:30.877219    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:30.939298    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:30.939298    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:33.484552    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:33.509270    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:33.543536    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.543536    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:33.547226    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:33.579403    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.579456    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:33.583656    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:33.611689    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.611715    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:33.615934    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:33.641875    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.641939    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:33.645973    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:33.678622    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.678622    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:33.682927    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:33.712281    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.712305    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:33.716240    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:33.744051    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.744127    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:33.747417    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:33.779521    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.779591    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:33.779591    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:33.779591    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:33.840187    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:33.840187    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:33.882639    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:33.882639    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:33.969148    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:33.957711    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.958835    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.961164    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.962371    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.963325    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:33.957711    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.958835    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.961164    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.962371    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.963325    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:33.969148    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:33.969148    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:33.997909    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:33.997909    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:35:36.305111   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:35:36.549852    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:36.575183    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:36.607678    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.607678    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:36.611597    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:36.639236    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.639236    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:36.642949    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:36.671624    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.671715    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:36.677217    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:36.704204    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.704254    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:36.707613    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:36.736929    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.736929    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:36.741122    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:36.769406    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.769406    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:36.772706    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:36.799543    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.799620    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:36.803815    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:36.831079    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.831159    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:36.831159    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:36.831200    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:36.896378    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:36.896378    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:36.934866    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:36.934866    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:37.023664    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:37.013641    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.015155    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.015847    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.018003    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.018979    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:35:37.023664    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:37.023664    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:37.053528    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:37.053528    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
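Each polling cycle above runs the same container-presence probe once per control-plane component, warning when `docker ps` returns no container ID for the `k8s_<component>` name filter. A minimal self-contained sketch of that loop follows; the `docker` function is a stub standing in for the real daemon (which, as in the log, reports no containers), so this is an illustration of the probe's shape, not minikube's actual implementation:

```shell
#!/bin/sh
# Stub: stands in for the real Docker daemon; prints no container IDs,
# matching the "0 containers" results in the log above.
docker() { printf ''; }

# Probe each control-plane component by container name filter, as the
# repeated "docker ps -a --filter=name=k8s_..." lines above do.
missing=""
for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
         kube-controller-manager kindnet kubernetes-dashboard; do
  ids=$(docker ps -a --filter="name=k8s_${c}" --format='{{.ID}}')
  [ -z "$ids" ] && missing="$missing $c"
done
echo "No container was found matching:$missing"
```

With the stub returning nothing, every component lands in the "missing" list, which is exactly the state the log reports on each cycle.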
	I1212 21:35:39.635097    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:39.658801    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:39.689842    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.689897    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:39.693102    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:39.720497    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.720497    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:39.724029    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:39.755115    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.755115    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:39.759351    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:39.788837    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.788837    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:39.795292    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:39.820715    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.820715    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:39.824986    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:39.851308    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.851308    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:39.855167    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:39.885106    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.885106    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:39.888522    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:39.919426    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.919426    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:39.919426    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:39.919426    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:40.002497    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:39.992667    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.993466    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.995734    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.996894    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.998122    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:35:40.002497    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:40.002497    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:40.033288    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:40.033332    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:40.080834    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:40.080834    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:40.164330    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:40.164330    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:42.710863    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:42.734865    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:42.767778    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.767778    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:42.771433    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:42.798128    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.798128    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:42.801849    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:42.831381    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.831381    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:42.834753    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:42.862923    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.862977    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:42.866577    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:42.894694    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.894694    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:42.898324    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:42.927095    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.927169    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:42.930897    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:42.960437    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.960485    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:42.963899    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:42.992769    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.992769    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:42.992769    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:42.992769    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:43.076050    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:43.066448    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.067916    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.069057    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.070194    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.071270    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:35:43.076050    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:43.076050    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:43.117444    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:43.117444    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:43.163609    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:43.163675    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:43.222686    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:43.222686    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1212 21:35:46.341652   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:35:45.769012    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:45.792622    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:45.828260    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.828300    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:45.832060    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:45.860265    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.860345    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:45.864126    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:45.892449    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.892449    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:45.895900    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:45.928107    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.928492    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:45.933848    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:45.963204    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.963204    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:45.967539    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:45.994068    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.994068    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:45.997960    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:46.029774    4248 logs.go:282] 0 containers: []
	W1212 21:35:46.029774    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:46.034005    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:46.064207    4248 logs.go:282] 0 containers: []
	W1212 21:35:46.064207    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:46.064275    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:46.064297    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:46.150334    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:46.151334    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:46.192422    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:46.192422    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:46.282161    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:46.268121    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.269865    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.274691    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.275494    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.278053    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:35:46.282161    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:46.282161    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:46.308247    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:46.308247    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
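The "container status" step repeated above uses a fallback chain: run `crictl` if `which` resolves it (or try the bare name), and fall back to `docker ps -a` if that fails. A small sketch of the same pattern, with both tools stubbed out as hypothetical functions so it runs standalone:

```shell
#!/bin/sh
# Stubs for illustration only: crictl exists but fails (exit 127),
# docker succeeds, so the fallback branch is taken.
crictl() { return 127; }
docker() { echo "docker fallback"; }

# Mirrors `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a`
# from the log above (sudo omitted in this sketch).
runner=$(command -v crictl || echo crictl)
status_out=$("$runner" ps -a 2>/dev/null || docker ps -a)
echo "$status_out"   # prints "docker fallback"
```

The `|| echo crictl` part guarantees a command word even when `which` finds nothing, so the `||` after the first invocation is what actually selects the Docker fallback.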
	I1212 21:35:48.880691    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:48.905168    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:48.936143    4248 logs.go:282] 0 containers: []
	W1212 21:35:48.936143    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:48.941055    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:48.967633    4248 logs.go:282] 0 containers: []
	W1212 21:35:48.967681    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:48.972986    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:49.001908    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.001978    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:49.005690    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:49.033288    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.033288    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:49.037158    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:49.068272    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.068272    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:49.072674    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:49.118349    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.118385    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:49.123821    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:49.152003    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.152003    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:49.155603    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:49.184782    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.184857    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:49.184857    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:49.184857    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:49.245561    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:49.245561    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:49.286211    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:49.286211    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:49.376977    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:49.367834   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.369032   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.370233   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.372131   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.374116   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:35:49.376977    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:49.376977    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:49.403713    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:49.403713    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:51.956745    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:51.982481    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:52.016133    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.016133    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:52.021023    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:52.049536    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.049536    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:52.053672    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:52.080846    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.080846    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:52.084803    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:52.113297    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.113338    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:52.116543    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:52.147940    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.147940    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:52.151545    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:52.182320    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.182320    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:52.186389    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:52.214710    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.214710    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:52.219053    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:52.245190    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.245190    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:52.245190    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:52.245190    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:52.298311    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:52.298311    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:52.366732    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:52.366732    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:52.407792    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:52.407792    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:52.496661    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:52.484312   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.485150   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.488916   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.491754   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.493226   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:52.484312   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.485150   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.488916   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.491754   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.493226   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:52.496661    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:52.496715    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:55.027190    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:55.056715    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:55.092856    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.092927    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:55.096633    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:55.129725    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.129780    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:55.133503    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:55.161685    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.161764    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:55.165325    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:55.194081    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.194081    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:55.197364    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:55.229572    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.229572    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:55.233158    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:55.260429    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.260429    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:55.264792    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:55.292582    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.292582    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:55.296385    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:55.324732    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.324732    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:55.324732    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:55.324732    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:55.378532    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:55.378610    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:55.442350    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:55.442350    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:55.482305    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:55.482305    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:55.568490    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:55.558199   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.559481   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.560697   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.561968   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.565448   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:55.558199   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.559481   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.560697   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.561968   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.565448   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:55.568490    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:55.568490    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:58.101014    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:58.125393    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:58.155725    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.155725    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:58.159868    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:58.189119    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.189119    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:58.193157    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:58.223237    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.223237    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:58.226683    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:58.255695    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.255753    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:58.260433    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:58.288788    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.288859    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:58.292437    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:58.323598    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.323671    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:58.327478    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:58.354090    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.354178    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:58.357888    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:58.386253    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.386279    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:58.386314    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:58.386314    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:58.447141    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:58.447141    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:58.486943    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:58.486943    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:58.568464    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:58.560119   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.561312   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.562297   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.564010   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.565939   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:58.560119   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.561312   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.562297   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.564010   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.565939   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:58.568464    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:58.568464    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:58.595849    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:58.595886    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:35:56.384511   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:36:01.149596    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:01.175623    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:01.208519    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.208575    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:01.211916    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:01.245312    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.245312    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:01.249530    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:01.278392    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.278392    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:01.286444    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:01.317355    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.317406    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:01.321724    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:01.353217    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.353217    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:01.357807    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:01.391831    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.391831    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:01.395723    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:01.424095    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.424095    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:01.429268    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:01.462926    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.462926    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:01.462926    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:01.462926    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:01.551074    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:01.539269   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.540233   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.541438   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.542446   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.543830   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:01.539269   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.540233   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.541438   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.542446   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.543830   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:01.551074    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:01.551074    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:01.578369    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:01.578369    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:01.629021    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:01.629021    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:01.698809    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:01.698809    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:04.243670    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:04.267614    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:04.301625    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.301625    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:04.305862    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:04.333024    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.333024    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:04.336022    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:04.367192    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.367192    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:04.370605    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:04.399595    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.399648    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:04.403374    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:04.432778    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.432778    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:04.436573    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:04.467412    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.467412    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:04.471329    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:04.498578    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.498578    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:04.502597    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:04.531764    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.531784    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:04.531784    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:04.531843    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:04.570958    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:04.570958    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:04.661113    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:04.648460   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.649248   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.652578   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.653840   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.654704   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:04.648460   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.649248   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.652578   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.653840   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.654704   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:04.661169    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:04.661169    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:04.690481    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:04.690542    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:04.741754    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:04.741754    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:07.308278    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:07.331410    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:07.365378    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.365378    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:07.369132    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:07.399048    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.399048    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:07.403054    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:07.435986    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.435986    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:07.440195    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:07.468277    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.468277    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:07.473061    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:07.502665    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.502737    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:07.505870    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:07.535294    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.535294    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:07.539205    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:07.568443    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.568443    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:07.571680    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:07.603485    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.603485    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:07.603485    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:07.603485    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:07.654029    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:07.654069    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:07.718737    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:07.718737    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:07.758197    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:07.758197    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:07.848949    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:07.837788   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.838778   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.839873   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.841153   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.842068   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:07.837788   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.838778   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.839873   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.841153   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.842068   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:07.848949    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:07.848949    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1212 21:36:06.422793   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:36:10.383554    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:10.406923    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:10.436672    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.438058    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:10.441328    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:10.470329    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.470329    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:10.475355    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:10.503029    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.504040    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:10.508067    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:10.535078    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.535078    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:10.538911    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:10.572578    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.572578    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:10.576953    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:10.605274    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.605274    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:10.609586    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:10.636893    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.636893    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:10.640901    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:10.670634    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.670634    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:10.670634    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:10.670634    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:10.740301    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:10.740301    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:10.777927    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:10.777927    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:10.872052    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:10.861842   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.862596   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.865231   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.866657   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.867639   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:10.861842   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.862596   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.865231   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.866657   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.867639   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:10.872052    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:10.872052    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:10.903069    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:10.903069    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:13.460331    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:13.482121    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:13.513917    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.513943    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:13.517730    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:13.551122    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.551122    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:13.554989    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:13.585497    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.585531    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:13.591062    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:13.617529    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.617529    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:13.620977    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:13.649520    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.649563    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:13.653279    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:13.680320    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.680320    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:13.684170    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:13.715222    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.715222    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:13.719105    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:13.748387    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.748387    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:13.748387    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:13.748387    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:13.813468    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:13.813468    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:13.854552    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:13.854552    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:13.940965    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:13.930876   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.931992   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.933250   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.934621   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.935762   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:13.930876   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.931992   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.933250   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.934621   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.935762   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:13.940965    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:13.940965    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:13.967276    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:13.967276    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:16.517841    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:16.542582    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:16.572379    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.572379    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:16.576052    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:16.603452    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.603544    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:16.607222    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:16.634623    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.634623    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:16.638649    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:16.669129    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.669129    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:16.673369    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:16.701294    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.701294    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:16.707834    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:16.734962    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.734962    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:16.739394    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:16.768315    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.768315    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:16.772559    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:16.801591    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.801670    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:16.801687    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:16.801687    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:16.866870    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:16.866870    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:16.906766    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:16.906766    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:16.998441    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:16.985782   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.986673   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.991870   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.993024   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.993940   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:16.985782   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.986673   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.991870   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.993024   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.993940   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:16.998441    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:16.998441    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:17.026255    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:17.026255    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:19.584671    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:19.610156    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:19.644064    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.644129    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:19.648288    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:19.678479    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.678479    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:19.682139    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:19.711766    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.711766    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:19.715209    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:19.744913    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.744913    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:19.748961    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:19.780312    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.780312    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:19.784184    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:19.812347    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.812347    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:19.816306    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:19.844923    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.844923    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:19.848927    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:19.877442    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.877442    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:19.877442    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:19.877442    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:19.942152    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:19.942152    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:19.981218    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:19.981218    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:20.072288    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:20.063301   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.064454   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.065982   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.067322   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.068426   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:20.063301   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.064454   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.065982   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.067322   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.068426   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:20.072288    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:20.072288    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:20.099643    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:20.099643    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:36:16.463968   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:36:25.116556   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1212 21:36:25.116556   13804 node_ready.go:38] duration metric: took 6m0.0006991s for node "no-preload-285600" to be "Ready" ...
	I1212 21:36:22.659535    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:22.685256    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:22.716650    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.716701    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:22.720282    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:22.748382    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.748382    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:22.752865    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:22.781255    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.781255    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:22.785427    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:22.817875    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.817875    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:22.822168    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:22.850625    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.850625    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:22.854306    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:22.882603    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.882665    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:22.886850    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:22.915661    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.915661    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:22.919273    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:22.948188    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.948219    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:22.948219    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:22.948272    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:23.001854    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:23.001854    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:23.062918    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:23.062918    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:23.102898    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:23.102898    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:23.188691    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:23.179388   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.180542   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.181616   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.183060   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.184504   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:23.179388   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.180542   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.181616   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.183060   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.184504   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:23.188733    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:23.188773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:25.120615   13804 out.go:203] 
	W1212 21:36:25.123657   13804 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 21:36:25.123657   13804 out.go:285] * 
	W1212 21:36:25.125621   13804 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 21:36:25.128573   13804 out.go:203] 
	
	
	==> Docker <==
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.732391828Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.732480039Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.732490940Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.732497041Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.732552048Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.732584552Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.732619056Z" level=info msg="Initializing buildkit"
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.834443812Z" level=info msg="Completed buildkit initialization"
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.839552952Z" level=info msg="Daemon has completed initialization"
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.839689269Z" level=info msg="API listen on /run/docker.sock"
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.839754977Z" level=info msg="API listen on [::]:2376"
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.839713872Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 21:30:21 no-preload-285600 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 12 21:30:22 no-preload-285600 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 21:30:22 no-preload-285600 cri-dockerd[1225]: time="2025-12-12T21:30:22Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 12 21:30:22 no-preload-285600 cri-dockerd[1225]: time="2025-12-12T21:30:22Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 12 21:30:22 no-preload-285600 cri-dockerd[1225]: time="2025-12-12T21:30:22Z" level=info msg="Start docker client with request timeout 0s"
	Dec 12 21:30:22 no-preload-285600 cri-dockerd[1225]: time="2025-12-12T21:30:22Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 12 21:30:22 no-preload-285600 cri-dockerd[1225]: time="2025-12-12T21:30:22Z" level=info msg="Loaded network plugin cni"
	Dec 12 21:30:22 no-preload-285600 cri-dockerd[1225]: time="2025-12-12T21:30:22Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 12 21:30:22 no-preload-285600 cri-dockerd[1225]: time="2025-12-12T21:30:22Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 12 21:30:22 no-preload-285600 cri-dockerd[1225]: time="2025-12-12T21:30:22Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 12 21:30:22 no-preload-285600 cri-dockerd[1225]: time="2025-12-12T21:30:22Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 12 21:30:22 no-preload-285600 cri-dockerd[1225]: time="2025-12-12T21:30:22Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 12 21:30:22 no-preload-285600 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:27.324049    8020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:27.325627    8020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:27.327224    8020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:27.328169    8020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:27.330126    8020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +5.817259] CPU: 7 PID: 461935 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f4cc709eb20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f4cc709eaf6.
	[  +0.000001] RSP: 002b:00007ffc97ee3b30 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.851832] CPU: 4 PID: 462074 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f8ee5e9fb20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f8ee5e9faf6.
	[  +0.000001] RSP: 002b:00007ffc84e853d0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 21:36:27 up  2:38,  0 user,  load average: 1.24, 1.18, 2.47
	Linux no-preload-285600 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 21:36:23 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:36:24 no-preload-285600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 481.
	Dec 12 21:36:24 no-preload-285600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:36:24 no-preload-285600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:36:24 no-preload-285600 kubelet[7845]: E1212 21:36:24.572968    7845 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:36:24 no-preload-285600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:36:24 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:36:25 no-preload-285600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 12 21:36:25 no-preload-285600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:36:25 no-preload-285600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:36:25 no-preload-285600 kubelet[7856]: E1212 21:36:25.349242    7856 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:36:25 no-preload-285600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:36:25 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:36:25 no-preload-285600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 12 21:36:25 no-preload-285600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:36:25 no-preload-285600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:36:26 no-preload-285600 kubelet[7883]: E1212 21:36:26.082426    7883 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:36:26 no-preload-285600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:36:26 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:36:26 no-preload-285600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 484.
	Dec 12 21:36:26 no-preload-285600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:36:26 no-preload-285600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:36:26 no-preload-285600 kubelet[7899]: E1212 21:36:26.828498    7899 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:36:26 no-preload-285600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:36:26 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-285600 -n no-preload-285600
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-285600 -n no-preload-285600: exit status 2 (589.3086ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-285600" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (377.91s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (91.77s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-449900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1212 21:31:01.355549   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-246400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:31:41.123597   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:32:15.013259   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-449900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m29.0265863s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_addons_e23971240287a88151a2b5edd52daaba3879ba4a_7.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-449900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-449900
helpers_test.go:244: (dbg) docker inspect newest-cni-449900:

-- stdout --
	[
	    {
	        "Id": "8fae8198a0e2f6167bc49d5761b5dae3aaaf06af529e7d9526a1f943f9f5952a",
	        "Created": "2025-12-12T21:22:35.195234972Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 422240,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T21:22:35.488144172Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/8fae8198a0e2f6167bc49d5761b5dae3aaaf06af529e7d9526a1f943f9f5952a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8fae8198a0e2f6167bc49d5761b5dae3aaaf06af529e7d9526a1f943f9f5952a/hostname",
	        "HostsPath": "/var/lib/docker/containers/8fae8198a0e2f6167bc49d5761b5dae3aaaf06af529e7d9526a1f943f9f5952a/hosts",
	        "LogPath": "/var/lib/docker/containers/8fae8198a0e2f6167bc49d5761b5dae3aaaf06af529e7d9526a1f943f9f5952a/8fae8198a0e2f6167bc49d5761b5dae3aaaf06af529e7d9526a1f943f9f5952a-json.log",
	        "Name": "/newest-cni-449900",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-449900:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-449900",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4a9bf3c0ee4eaaabafc20e3de9d1f9691ed63701dacc3088a1369c1bfb243b50-init/diff:/var/lib/docker/overlay2/98a48e148b465d6f3cfef773b7defe35ccd4e6e0eb3c49d8b329f27b5b52fa09/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4a9bf3c0ee4eaaabafc20e3de9d1f9691ed63701dacc3088a1369c1bfb243b50/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4a9bf3c0ee4eaaabafc20e3de9d1f9691ed63701dacc3088a1369c1bfb243b50/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4a9bf3c0ee4eaaabafc20e3de9d1f9691ed63701dacc3088a1369c1bfb243b50/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-449900",
	                "Source": "/var/lib/docker/volumes/newest-cni-449900/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-449900",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-449900",
	                "name.minikube.sigs.k8s.io": "newest-cni-449900",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fde89981b6eb4ca746a1211ab1fbe1f31940a2b31e5100a41e3540a20fc35851",
	            "SandboxKey": "/var/run/docker/netns/fde89981b6eb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62611"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62612"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62608"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62609"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62610"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-449900": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "bcedcac448e9e1d98fcddd7097fe310c50b6a637d5f23ebf519e961f822823ab",
	                    "EndpointID": "7f3443bddde4dd45dcc425732d5708cf2a5e19f01ca0bcdde4511a4d59f9587d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-449900",
	                        "8fae8198a0e2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-449900 -n newest-cni-449900
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-449900 -n newest-cni-449900: exit status 6 (582.6225ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1212 21:32:25.428687   14140 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-449900" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
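The exit status 6 above is driven by the stderr message: the profile's context is absent from the kubeconfig. As a hedged aside, a minimal shell sketch of that condition, using a throwaway kubeconfig file (its contents, the profile name reuse, and the grep-based check are assumptions for illustration, not minikube's actual implementation):

```shell
# Sketch (editorial, not from the log): reproduce the "does not appear in
# kubeconfig" condition that caused exit status 6 above, against a
# throwaway kubeconfig that only knows the default "minikube" context.
KUBECONFIG_FILE="$(mktemp)"
cat > "$KUBECONFIG_FILE" <<'EOF'
contexts:
- context:
    cluster: minikube
  name: minikube
EOF
PROFILE="newest-cni-449900"
if grep -q "name: $PROFILE" "$KUBECONFIG_FILE"; then
  STATUS="context present"
else
  STATUS="context missing"  # minikube reports this as a Kubeconfig status error
fi
echo "$STATUS"
rm -f "$KUBECONFIG_FILE"
```

In practice the fix the tool itself suggests (see the stdout block above) is `minikube update-context`, which rewrites the profile's kubeconfig entry.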
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-449900 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-449900 logs -n 25: (1.1150496s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                            ARGS                                                                                                            │           PROFILE            │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ old-k8s-version-246400 image list --format=json                                                                                                                                                                            │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ pause   │ -p old-k8s-version-246400 --alsologtostderr -v=1                                                                                                                                                                           │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ unpause │ -p old-k8s-version-246400 --alsologtostderr -v=1                                                                                                                                                                           │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ delete  │ -p old-k8s-version-246400                                                                                                                                                                                                  │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ delete  │ -p old-k8s-version-246400                                                                                                                                                                                                  │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ start   │ -p newest-cni-449900 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-124600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                         │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ stop    │ -p default-k8s-diff-port-124600 --alsologtostderr -v=3                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ image   │ embed-certs-729900 image list --format=json                                                                                                                                                                                │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ pause   │ -p embed-certs-729900 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ unpause │ -p embed-certs-729900 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-124600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                    │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ start   │ -p default-k8s-diff-port-124600 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:23 UTC │
	│ delete  │ -p embed-certs-729900                                                                                                                                                                                                      │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:23 UTC │
	│ delete  │ -p embed-certs-729900                                                                                                                                                                                                      │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ image   │ default-k8s-diff-port-124600 image list --format=json                                                                                                                                                                      │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ pause   │ -p default-k8s-diff-port-124600 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ unpause │ -p default-k8s-diff-port-124600 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ delete  │ -p default-k8s-diff-port-124600                                                                                                                                                                                            │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ delete  │ -p default-k8s-diff-port-124600                                                                                                                                                                                            │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ addons  │ enable metrics-server -p no-preload-285600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                    │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:28 UTC │                     │
	│ stop    │ -p no-preload-285600 --alsologtostderr -v=3                                                                                                                                                                                │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:30 UTC │ 12 Dec 25 21:30 UTC │
	│ addons  │ enable dashboard -p no-preload-285600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                               │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:30 UTC │ 12 Dec 25 21:30 UTC │
	│ start   │ -p no-preload-285600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:30 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-449900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                    │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:30 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 21:30:11
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 21:30:11.311431   13804 out.go:360] Setting OutFile to fd 2028 ...
	I1212 21:30:11.366494   13804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:30:11.367494   13804 out.go:374] Setting ErrFile to fd 840...
	I1212 21:30:11.367494   13804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:30:11.380496   13804 out.go:368] Setting JSON to false
	I1212 21:30:11.382494   13804 start.go:133] hostinfo: {"hostname":"minikube4","uptime":9149,"bootTime":1765565862,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 21:30:11.382494   13804 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 21:30:11.386494   13804 out.go:179] * [no-preload-285600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 21:30:11.389494   13804 notify.go:221] Checking for updates...
	I1212 21:30:11.390508   13804 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:30:11.393495   13804 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:30:11.395506   13804 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 21:30:11.398496   13804 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 21:30:11.400504   13804 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:30:11.403497   13804 config.go:182] Loaded profile config "no-preload-285600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:30:11.405494   13804 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 21:30:11.518260   13804 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 21:30:11.522047   13804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:30:11.753278   13804 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 21:30:11.731465297 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:30:11.756457   13804 out.go:179] * Using the docker driver based on existing profile
	I1212 21:30:11.760219   13804 start.go:309] selected driver: docker
	I1212 21:30:11.760257   13804 start.go:927] validating driver "docker" against &{Name:no-preload-285600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-285600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:30:11.760327   13804 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:30:11.846740   13804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:30:12.077144   13804 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 21:30:12.058111571 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:30:12.077698   13804 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:30:12.077698   13804 cni.go:84] Creating CNI manager for ""
	I1212 21:30:12.077698   13804 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:30:12.077698   13804 start.go:353] cluster config:
	{Name:no-preload-285600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-285600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:30:12.080814   13804 out.go:179] * Starting "no-preload-285600" primary control-plane node in "no-preload-285600" cluster
	I1212 21:30:12.083912   13804 cache.go:134] Beginning downloading kic base image for docker with docker
	I1212 21:30:12.086321   13804 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:30:12.089654   13804 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 21:30:12.089654   13804 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:30:12.089654   13804 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\config.json ...
	I1212 21:30:12.089654   13804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1212 21:30:12.089654   13804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1212 21:30:12.089654   13804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1212 21:30:12.089654   13804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1212 21:30:12.090649   13804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1212 21:30:12.090649   13804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1212 21:30:12.090649   13804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1212 21:30:12.090649   13804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1212 21:30:12.353137   13804 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:30:12.353137   13804 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:30:12.353137   13804 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:30:12.353137   13804 start.go:360] acquireMachinesLock for no-preload-285600: {Name:mk2731f875a3a62f76017c58cc7d43a1bb1f8ba5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:30:12.353137   13804 start.go:364] duration metric: took 0s to acquireMachinesLock for "no-preload-285600"
	I1212 21:30:12.353137   13804 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:30:12.353684   13804 fix.go:54] fixHost starting: 
	I1212 21:30:12.365514   13804 cli_runner.go:164] Run: docker container inspect no-preload-285600 --format={{.State.Status}}
	I1212 21:30:12.437166   13804 fix.go:112] recreateIfNeeded on no-preload-285600: state=Stopped err=<nil>
	W1212 21:30:12.437166   13804 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:30:12.443159   13804 out.go:252] * Restarting existing docker container for "no-preload-285600" ...
	I1212 21:30:12.448159   13804 cli_runner.go:164] Run: docker start no-preload-285600
	I1212 21:30:13.953419   13804 cli_runner.go:217] Completed: docker start no-preload-285600: (1.5052355s)
	I1212 21:30:13.960859   13804 cli_runner.go:164] Run: docker container inspect no-preload-285600 --format={{.State.Status}}
	I1212 21:30:14.031860   13804 kic.go:430] container "no-preload-285600" state is running.
	I1212 21:30:14.039849   13804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-285600
	I1212 21:30:14.112858   13804 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\config.json ...
	I1212 21:30:14.114845   13804 machine.go:94] provisionDockerMachine start ...
	I1212 21:30:14.119854   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:14.192854   13804 main.go:143] libmachine: Using SSH client type: native
	I1212 21:30:14.193857   13804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62838 <nil> <nil>}
	I1212 21:30:14.193857   13804 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:30:14.195874   13804 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 21:30:14.957274   13804 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:30:14.957533   13804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1212 21:30:14.957533   13804 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 2.866838s
	I1212 21:30:14.957533   13804 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1212 21:30:14.963183   13804 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:30:14.963323   13804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1212 21:30:14.963323   13804 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 2.8726277s
	I1212 21:30:14.963323   13804 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1212 21:30:14.964339   13804 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:30:14.964339   13804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1212 21:30:14.964339   13804 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 2.8736432s
	I1212 21:30:14.964339   13804 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1212 21:30:14.964339   13804 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:30:14.964339   13804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1212 21:30:14.964339   13804 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 2.8746379s
	I1212 21:30:14.964339   13804 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1212 21:30:14.995149   13804 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:30:14.995149   13804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1212 21:30:14.995149   13804 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 2.9054481s
	I1212 21:30:14.995149   13804 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1212 21:30:15.001398   13804 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:30:15.001398   13804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1212 21:30:15.001398   13804 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 2.9116969s
	I1212 21:30:15.001398   13804 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1212 21:30:15.006281   13804 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:30:15.006281   13804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1212 21:30:15.006978   13804 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 2.9162031s
	I1212 21:30:15.006978   13804 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1212 21:30:15.039446   13804 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:30:15.039446   13804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1212 21:30:15.039446   13804 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 2.9497439s
	I1212 21:30:15.039446   13804 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1212 21:30:15.039446   13804 cache.go:87] Successfully saved all images to host disk.
	I1212 21:30:17.371371   13804 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-285600
	
	I1212 21:30:17.371371   13804 ubuntu.go:182] provisioning hostname "no-preload-285600"
	I1212 21:30:17.374694   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:17.431417   13804 main.go:143] libmachine: Using SSH client type: native
	I1212 21:30:17.431417   13804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62838 <nil> <nil>}
	I1212 21:30:17.431417   13804 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-285600 && echo "no-preload-285600" | sudo tee /etc/hostname
	I1212 21:30:17.615567   13804 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-285600
	
	I1212 21:30:17.620003   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:17.675055   13804 main.go:143] libmachine: Using SSH client type: native
	I1212 21:30:17.675719   13804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62838 <nil> <nil>}
	I1212 21:30:17.675719   13804 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-285600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-285600/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-285600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:30:17.863046   13804 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:30:17.863046   13804 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1212 21:30:17.863046   13804 ubuntu.go:190] setting up certificates
	I1212 21:30:17.863579   13804 provision.go:84] configureAuth start
	I1212 21:30:17.867203   13804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-285600
	I1212 21:30:17.921910   13804 provision.go:143] copyHostCerts
	I1212 21:30:17.921910   13804 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1212 21:30:17.921910   13804 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1212 21:30:17.922850   13804 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1212 21:30:17.923414   13804 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1212 21:30:17.923414   13804 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1212 21:30:17.923977   13804 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 21:30:17.924758   13804 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1212 21:30:17.924758   13804 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1212 21:30:17.924916   13804 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 21:30:17.925647   13804 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.no-preload-285600 san=[127.0.0.1 192.168.121.2 localhost minikube no-preload-285600]
	I1212 21:30:17.969098   13804 provision.go:177] copyRemoteCerts
	I1212 21:30:17.972961   13804 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:30:17.975732   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:18.033900   13804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62838 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	I1212 21:30:18.156529   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 21:30:18.190271   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 21:30:18.219028   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 21:30:18.247371   13804 provision.go:87] duration metric: took 383.7852ms to configureAuth
	I1212 21:30:18.247371   13804 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:30:18.248196   13804 config.go:182] Loaded profile config "no-preload-285600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:30:18.253065   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:18.307356   13804 main.go:143] libmachine: Using SSH client type: native
	I1212 21:30:18.308437   13804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62838 <nil> <nil>}
	I1212 21:30:18.308437   13804 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 21:30:18.484387   13804 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1212 21:30:18.484387   13804 ubuntu.go:71] root file system type: overlay
	I1212 21:30:18.484387   13804 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 21:30:18.488431   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:18.543927   13804 main.go:143] libmachine: Using SSH client type: native
	I1212 21:30:18.544057   13804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62838 <nil> <nil>}
	I1212 21:30:18.544057   13804 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 21:30:18.725295   13804 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 21:30:18.729293   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:18.786383   13804 main.go:143] libmachine: Using SSH client type: native
	I1212 21:30:18.787353   13804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 62838 <nil> <nil>}
	I1212 21:30:18.787415   13804 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 21:30:18.969169   13804 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:30:18.969169   13804 machine.go:97] duration metric: took 4.8542454s to provisionDockerMachine
	I1212 21:30:18.969169   13804 start.go:293] postStartSetup for "no-preload-285600" (driver="docker")
	I1212 21:30:18.969169   13804 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:30:18.973559   13804 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:30:18.977516   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:19.030405   13804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62838 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	I1212 21:30:19.165106   13804 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:30:19.173383   13804 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:30:19.173383   13804 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:30:19.173383   13804 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1212 21:30:19.173383   13804 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1212 21:30:19.174601   13804 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> 133962.pem in /etc/ssl/certs
	I1212 21:30:19.179034   13804 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:30:19.191703   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /etc/ssl/certs/133962.pem (1708 bytes)
	I1212 21:30:19.218878   13804 start.go:296] duration metric: took 249.7055ms for postStartSetup
	I1212 21:30:19.224011   13804 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:30:19.227131   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:19.279470   13804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62838 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	I1212 21:30:19.406985   13804 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:30:19.415865   13804 fix.go:56] duration metric: took 7.062067s for fixHost
	I1212 21:30:19.415865   13804 start.go:83] releasing machines lock for "no-preload-285600", held for 7.0626137s
	I1212 21:30:19.419613   13804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-285600
	I1212 21:30:19.476904   13804 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1212 21:30:19.481453   13804 ssh_runner.go:195] Run: cat /version.json
	I1212 21:30:19.481484   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:19.483912   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:19.536799   13804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62838 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	I1212 21:30:19.547561   13804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62838 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	W1212 21:30:19.661665   13804 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1212 21:30:19.667210   13804 ssh_runner.go:195] Run: systemctl --version
	I1212 21:30:19.682255   13804 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:30:19.691854   13804 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:30:19.696344   13804 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:30:19.710554   13804 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:30:19.710554   13804 start.go:496] detecting cgroup driver to use...
	I1212 21:30:19.710554   13804 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:30:19.710554   13804 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:30:19.738854   13804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 21:30:19.758305   13804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1212 21:30:19.763550   13804 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1212 21:30:19.763550   13804 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1212 21:30:19.778518   13804 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 21:30:19.782511   13804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 21:30:19.803423   13804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:30:19.823199   13804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 21:30:19.842875   13804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:30:19.861015   13804 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:30:19.878016   13804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 21:30:19.896016   13804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 21:30:19.917384   13804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 21:30:19.937797   13804 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:30:19.955074   13804 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:30:19.974670   13804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:30:20.125841   13804 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 21:30:20.307940   13804 start.go:496] detecting cgroup driver to use...
	I1212 21:30:20.307940   13804 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:30:20.312305   13804 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 21:30:20.338880   13804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:30:20.361799   13804 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:30:20.425840   13804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:30:20.448078   13804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 21:30:20.466273   13804 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:30:20.493401   13804 ssh_runner.go:195] Run: which cri-dockerd
	I1212 21:30:20.505640   13804 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 21:30:20.517978   13804 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1212 21:30:20.546077   13804 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 21:30:20.685945   13804 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 21:30:20.820797   13804 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 21:30:20.820797   13804 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 21:30:20.846868   13804 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1212 21:30:20.870150   13804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:30:21.006241   13804 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 21:30:21.847456   13804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:30:21.870131   13804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1212 21:30:21.892265   13804 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1212 21:30:21.918146   13804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:30:21.940975   13804 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 21:30:22.091526   13804 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 21:30:22.237813   13804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:30:22.375430   13804 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 21:30:22.400803   13804 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1212 21:30:22.424619   13804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:30:22.577023   13804 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1212 21:30:22.684499   13804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:30:22.703199   13804 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 21:30:22.707457   13804 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 21:30:22.717003   13804 start.go:564] Will wait 60s for crictl version
	I1212 21:30:22.722114   13804 ssh_runner.go:195] Run: which crictl
	I1212 21:30:22.736201   13804 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:30:22.783830   13804 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1212 21:30:22.787385   13804 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:30:22.831267   13804 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:30:22.876285   13804 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1212 21:30:22.880058   13804 cli_runner.go:164] Run: docker exec -t no-preload-285600 dig +short host.docker.internal
	I1212 21:30:23.014334   13804 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1212 21:30:23.019335   13804 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1212 21:30:23.026955   13804 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:30:23.046973   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:23.103000   13804 kubeadm.go:884] updating cluster {Name:no-preload-285600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-285600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 21:30:23.103289   13804 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 21:30:23.108430   13804 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:30:23.145267   13804 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 21:30:23.145267   13804 cache_images.go:86] Images are preloaded, skipping loading
	I1212 21:30:23.145267   13804 kubeadm.go:935] updating node { 192.168.121.2 8443 v1.35.0-beta.0 docker true true} ...
	I1212 21:30:23.145794   13804 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-285600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.121.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-285600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:30:23.149307   13804 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 21:30:23.218275   13804 cni.go:84] Creating CNI manager for ""
	I1212 21:30:23.218275   13804 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:30:23.218275   13804 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 21:30:23.218275   13804 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.121.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-285600 NodeName:no-preload-285600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.121.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.121.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:30:23.218275   13804 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.121.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-285600"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.121.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.121.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:30:23.224071   13804 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 21:30:23.236229   13804 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:30:23.240995   13804 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:30:23.253852   13804 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1212 21:30:23.272662   13804 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 21:30:23.293961   13804 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1212 21:30:23.318313   13804 ssh_runner.go:195] Run: grep 192.168.121.2	control-plane.minikube.internal$ /etc/hosts
	I1212 21:30:23.325082   13804 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.121.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:30:23.346396   13804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:30:23.486209   13804 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:30:23.509994   13804 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600 for IP: 192.168.121.2
	I1212 21:30:23.509994   13804 certs.go:195] generating shared ca certs ...
	I1212 21:30:23.509994   13804 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:30:23.510778   13804 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1212 21:30:23.510778   13804 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1212 21:30:23.510778   13804 certs.go:257] generating profile certs ...
	I1212 21:30:23.511512   13804 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\client.key
	I1212 21:30:23.511512   13804 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.key.a3b2baf6
	I1212 21:30:23.512294   13804 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\proxy-client.key
	I1212 21:30:23.513282   13804 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem (1338 bytes)
	W1212 21:30:23.513306   13804 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396_empty.pem, impossibly tiny 0 bytes
	I1212 21:30:23.513306   13804 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1212 21:30:23.513306   13804 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 21:30:23.513825   13804 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 21:30:23.514133   13804 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1212 21:30:23.514133   13804 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem (1708 bytes)
	I1212 21:30:23.516015   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:30:23.543721   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 21:30:23.570887   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:30:23.599906   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:30:23.628308   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 21:30:23.655194   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 21:30:23.680557   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:30:23.709445   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-285600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:30:23.735490   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:30:23.763952   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem --> /usr/share/ca-certificates/13396.pem (1338 bytes)
	I1212 21:30:23.788819   13804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /usr/share/ca-certificates/133962.pem (1708 bytes)
	I1212 21:30:23.817493   13804 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:30:23.843244   13804 ssh_runner.go:195] Run: openssl version
	I1212 21:30:23.857029   13804 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:30:23.875085   13804 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:30:23.894989   13804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:30:23.903335   13804 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:30:23.907817   13804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:30:23.954829   13804 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:30:23.973758   13804 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13396.pem
	I1212 21:30:23.992281   13804 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13396.pem /etc/ssl/certs/13396.pem
	I1212 21:30:24.012825   13804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13396.pem
	I1212 21:30:24.021794   13804 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 21:30:24.027262   13804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13396.pem
	I1212 21:30:24.076227   13804 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:30:24.097029   13804 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/133962.pem
	I1212 21:30:24.114364   13804 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/133962.pem /etc/ssl/certs/133962.pem
	I1212 21:30:24.131237   13804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133962.pem
	I1212 21:30:24.139762   13804 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 21:30:24.144290   13804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133962.pem
	I1212 21:30:24.195500   13804 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:30:24.213100   13804 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:30:24.224086   13804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:30:24.274630   13804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:30:24.322795   13804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:30:24.371721   13804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:30:24.422510   13804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:30:24.475266   13804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 21:30:24.519671   13804 kubeadm.go:401] StartCluster: {Name:no-preload-285600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-285600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:30:24.524264   13804 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 21:30:24.559622   13804 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:30:24.571455   13804 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 21:30:24.571455   13804 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 21:30:24.576936   13804 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:30:24.591763   13804 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:30:24.596129   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:24.651902   13804 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-285600" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:30:24.652253   13804 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-285600" cluster setting kubeconfig missing "no-preload-285600" context setting]
	I1212 21:30:24.652697   13804 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:30:24.674806   13804 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:30:24.692277   13804 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1212 21:30:24.692277   13804 kubeadm.go:602] duration metric: took 120.82ms to restartPrimaryControlPlane
	I1212 21:30:24.692277   13804 kubeadm.go:403] duration metric: took 172.6933ms to StartCluster
	I1212 21:30:24.692277   13804 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:30:24.692277   13804 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:30:24.693507   13804 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:30:24.694169   13804 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 21:30:24.694169   13804 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 21:30:24.694746   13804 addons.go:70] Setting storage-provisioner=true in profile "no-preload-285600"
	I1212 21:30:24.694746   13804 addons.go:70] Setting dashboard=true in profile "no-preload-285600"
	I1212 21:30:24.694746   13804 addons.go:239] Setting addon storage-provisioner=true in "no-preload-285600"
	I1212 21:30:24.694746   13804 config.go:182] Loaded profile config "no-preload-285600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:30:24.694746   13804 addons.go:70] Setting default-storageclass=true in profile "no-preload-285600"
	I1212 21:30:24.694746   13804 addons.go:239] Setting addon dashboard=true in "no-preload-285600"
	I1212 21:30:24.694746   13804 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-285600"
	I1212 21:30:24.694746   13804 host.go:66] Checking if "no-preload-285600" exists ...
	W1212 21:30:24.694746   13804 addons.go:248] addon dashboard should already be in state true
	I1212 21:30:24.694746   13804 host.go:66] Checking if "no-preload-285600" exists ...
	I1212 21:30:24.698139   13804 out.go:179] * Verifying Kubernetes components...
	I1212 21:30:24.704555   13804 cli_runner.go:164] Run: docker container inspect no-preload-285600 --format={{.State.Status}}
	I1212 21:30:24.704612   13804 cli_runner.go:164] Run: docker container inspect no-preload-285600 --format={{.State.Status}}
	I1212 21:30:24.704612   13804 cli_runner.go:164] Run: docker container inspect no-preload-285600 --format={{.State.Status}}
	I1212 21:30:24.705748   13804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:30:24.762431   13804 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:30:24.762431   13804 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1212 21:30:24.764424   13804 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:30:24.764424   13804 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:30:24.767454   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:24.767454   13804 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1212 21:30:24.769433   13804 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 21:30:24.769433   13804 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 21:30:24.773442   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:24.780427   13804 addons.go:239] Setting addon default-storageclass=true in "no-preload-285600"
	I1212 21:30:24.780427   13804 host.go:66] Checking if "no-preload-285600" exists ...
	I1212 21:30:24.787430   13804 cli_runner.go:164] Run: docker container inspect no-preload-285600 --format={{.State.Status}}
	I1212 21:30:24.820427   13804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62838 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	I1212 21:30:24.826439   13804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62838 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	I1212 21:30:24.837426   13804 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:30:24.837426   13804 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:30:24.840425   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:24.872429   13804 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:30:24.893413   13804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62838 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-285600\id_rsa Username:docker}
	I1212 21:30:24.963677   13804 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 21:30:24.963677   13804 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 21:30:24.967679   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:30:24.982575   13804 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 21:30:24.982575   13804 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 21:30:25.004580   13804 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 21:30:25.004580   13804 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 21:30:25.025729   13804 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 21:30:25.025729   13804 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 21:30:25.051800   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:30:25.053624   13804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-285600
	I1212 21:30:25.061392   13804 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 21:30:25.061392   13804 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1212 21:30:25.072688   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.072688   13804 retry.go:31] will retry after 158.823977ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.110005   13804 node_ready.go:35] waiting up to 6m0s for node "no-preload-285600" to be "Ready" ...
	I1212 21:30:25.146675   13804 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 21:30:25.146675   13804 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 21:30:25.168917   13804 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 21:30:25.168917   13804 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 21:30:25.190262   13804 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 21:30:25.190262   13804 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1212 21:30:25.237134   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:30:25.255181   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.255181   13804 retry.go:31] will retry after 222.613203ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.258203   13804 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 21:30:25.258203   13804 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 21:30:25.281581   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:30:25.360910   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.360910   13804 retry.go:31] will retry after 528.174411ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:30:25.396771   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.396771   13804 retry.go:31] will retry after 334.337457ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.483899   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:30:25.562673   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.562738   13804 retry.go:31] will retry after 526.924446ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.736852   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:30:25.814449   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.814449   13804 retry.go:31] will retry after 242.822318ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.895040   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:30:25.976722   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:25.976722   13804 retry.go:31] will retry after 649.835265ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:26.062555   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 21:30:26.094920   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:30:26.173577   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:26.173577   13804 retry.go:31] will retry after 303.723342ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:30:26.206503   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:26.206503   13804 retry.go:31] will retry after 711.474393ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:26.482577   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:30:26.584453   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:26.584453   13804 retry.go:31] will retry after 1.214394493s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:26.632132   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:30:26.707550   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:26.707577   13804 retry.go:31] will retry after 679.917817ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:26.923400   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:30:27.004405   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:27.004405   13804 retry.go:31] will retry after 921.431314ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:27.393372   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:30:27.464948   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:27.464948   13804 retry.go:31] will retry after 1.86941024s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:27.806617   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:30:27.880154   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:27.880250   13804 retry.go:31] will retry after 870.607292ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:27.930624   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:30:28.010568   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:28.010568   13804 retry.go:31] will retry after 1.688030068s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:28.756973   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:30:28.854322   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:28.854322   13804 retry.go:31] will retry after 1.72717743s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:29.339399   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:30:29.418550   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:29.418550   13804 retry.go:31] will retry after 2.160026616s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:29.704224   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:30:29.784607   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:29.784607   13804 retry.go:31] will retry after 1.396897779s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:30.585867   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:30:30.664243   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:30.664314   13804 retry.go:31] will retry after 3.060722722s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:31.188925   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:30:31.270881   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:31.270881   13804 retry.go:31] will retry after 3.544218054s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:31.584146   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:30:31.661710   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:31.661710   13804 retry.go:31] will retry after 3.805789738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:33.730718   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:30:33.815337   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:33.815337   13804 retry.go:31] will retry after 4.430320375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:34.819397   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:30:34.899243   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:34.899243   13804 retry.go:31] will retry after 6.309363077s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:30:35.143657   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:30:35.473027   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:30:35.571773   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:35.571773   13804 retry.go:31] will retry after 2.80996556s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:38.250480   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:30:38.332990   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:38.332990   13804 retry.go:31] will retry after 8.351867848s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:38.387198   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:30:38.470982   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:38.470982   13804 retry.go:31] will retry after 8.954426178s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:41.214251   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:30:41.296230   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:41.296230   13804 retry.go:31] will retry after 7.46364933s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:30:45.188063   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:30:46.689378   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:30:46.780060   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:46.780173   13804 retry.go:31] will retry after 7.773373788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:47.432175   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:30:47.509090   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:47.509090   13804 retry.go:31] will retry after 12.066548893s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:48.765276   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:30:48.850081   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:48.850081   13804 retry.go:31] will retry after 11.297010825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:52.137302    3280 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 21:30:52.138027    3280 kubeadm.go:319] 
	I1212 21:30:52.138843    3280 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 21:30:52.141943    3280 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 21:30:52.143509    3280 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 21:30:52.143682    3280 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 21:30:52.143737    3280 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1212 21:30:52.143737    3280 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1212 21:30:52.143737    3280 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1212 21:30:52.143737    3280 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1212 21:30:52.143737    3280 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1212 21:30:52.144310    3280 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1212 21:30:52.144310    3280 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1212 21:30:52.144310    3280 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1212 21:30:52.144310    3280 kubeadm.go:319] CONFIG_INET: enabled
	I1212 21:30:52.144310    3280 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1212 21:30:52.144310    3280 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1212 21:30:52.144995    3280 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1212 21:30:52.144995    3280 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1212 21:30:52.144995    3280 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1212 21:30:52.144995    3280 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1212 21:30:52.144995    3280 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1212 21:30:52.145604    3280 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1212 21:30:52.145604    3280 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1212 21:30:52.145604    3280 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1212 21:30:52.145604    3280 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1212 21:30:52.145604    3280 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1212 21:30:52.145604    3280 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1212 21:30:52.146177    3280 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1212 21:30:52.146242    3280 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1212 21:30:52.146317    3280 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1212 21:30:52.146393    3280 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1212 21:30:52.146451    3280 kubeadm.go:319] OS: Linux
	I1212 21:30:52.146525    3280 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 21:30:52.146600    3280 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 21:30:52.146675    3280 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 21:30:52.146751    3280 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 21:30:52.146798    3280 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 21:30:52.146881    3280 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 21:30:52.146881    3280 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 21:30:52.146881    3280 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 21:30:52.146881    3280 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 21:30:52.146881    3280 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:30:52.146881    3280 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:30:52.147438    3280 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 21:30:52.147438    3280 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:30:52.149720    3280 out.go:252]   - Generating certificates and keys ...
	I1212 21:30:52.150302    3280 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 21:30:52.150302    3280 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 21:30:52.150302    3280 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 21:30:52.150302    3280 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 21:30:52.150831    3280 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 21:30:52.150938    3280 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 21:30:52.150938    3280 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 21:30:52.150938    3280 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 21:30:52.150938    3280 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 21:30:52.150938    3280 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 21:30:52.151461    3280 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 21:30:52.151568    3280 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:30:52.151653    3280 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:30:52.151698    3280 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 21:30:52.151698    3280 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:30:52.151698    3280 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:30:52.151698    3280 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:30:52.151698    3280 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:30:52.152300    3280 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:30:52.154451    3280 out.go:252]   - Booting up control plane ...
	I1212 21:30:52.154764    3280 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:30:52.154956    3280 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:30:52.155143    3280 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:30:52.155412    3280 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:30:52.155651    3280 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 21:30:52.155876    3280 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 21:30:52.156043    3280 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:30:52.156043    3280 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 21:30:52.156043    3280 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 21:30:52.156043    3280 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 21:30:52.156043    3280 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001226136s
	I1212 21:30:52.156043    3280 kubeadm.go:319] 
	I1212 21:30:52.156043    3280 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 21:30:52.156043    3280 kubeadm.go:319] 	- The kubelet is not running
	I1212 21:30:52.156809    3280 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 21:30:52.156973    3280 kubeadm.go:319] 
	I1212 21:30:52.156973    3280 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 21:30:52.156973    3280 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 21:30:52.156973    3280 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 21:30:52.156973    3280 kubeadm.go:319] 
	I1212 21:30:52.156973    3280 kubeadm.go:403] duration metric: took 8m3.9483682s to StartCluster
	I1212 21:30:52.156973    3280 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:30:52.160832    3280 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:30:52.223294    3280 cri.go:89] found id: ""
	I1212 21:30:52.223294    3280 logs.go:282] 0 containers: []
	W1212 21:30:52.223294    3280 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:30:52.223294    3280 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:30:52.227810    3280 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:30:52.274653    3280 cri.go:89] found id: ""
	I1212 21:30:52.274653    3280 logs.go:282] 0 containers: []
	W1212 21:30:52.274653    3280 logs.go:284] No container was found matching "etcd"
	I1212 21:30:52.274653    3280 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:30:52.279047    3280 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:30:52.320887    3280 cri.go:89] found id: ""
	I1212 21:30:52.320887    3280 logs.go:282] 0 containers: []
	W1212 21:30:52.320887    3280 logs.go:284] No container was found matching "coredns"
	I1212 21:30:52.320887    3280 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:30:52.323880    3280 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:30:52.368122    3280 cri.go:89] found id: ""
	I1212 21:30:52.368122    3280 logs.go:282] 0 containers: []
	W1212 21:30:52.368122    3280 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:30:52.368122    3280 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:30:52.372480    3280 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:30:52.416439    3280 cri.go:89] found id: ""
	I1212 21:30:52.416439    3280 logs.go:282] 0 containers: []
	W1212 21:30:52.416439    3280 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:30:52.416439    3280 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:30:52.420746    3280 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:30:52.464733    3280 cri.go:89] found id: ""
	I1212 21:30:52.464800    3280 logs.go:282] 0 containers: []
	W1212 21:30:52.464800    3280 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:30:52.464800    3280 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:30:52.469057    3280 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:30:52.512080    3280 cri.go:89] found id: ""
	I1212 21:30:52.512158    3280 logs.go:282] 0 containers: []
	W1212 21:30:52.512158    3280 logs.go:284] No container was found matching "kindnet"
	I1212 21:30:52.512158    3280 logs.go:123] Gathering logs for Docker ...
	I1212 21:30:52.512158    3280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:30:52.543781    3280 logs.go:123] Gathering logs for container status ...
	I1212 21:30:52.543781    3280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:30:52.588290    3280 logs.go:123] Gathering logs for kubelet ...
	I1212 21:30:52.588290    3280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:30:52.653033    3280 logs.go:123] Gathering logs for dmesg ...
	I1212 21:30:52.653033    3280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:30:52.693931    3280 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:30:52.693931    3280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:30:52.781976    3280 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:30:52.773234   10313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:30:52.774514   10313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:30:52.775469   10313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:30:52.776968   10313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:30:52.777917   10313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:30:52.773234   10313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:30:52.774514   10313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:30:52.775469   10313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:30:52.776968   10313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:30:52.777917   10313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1212 21:30:52.781976    3280 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001226136s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
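	The kubelet-check phase above polls http://127.0.0.1:10248/healthz for up to 4m0s before declaring the kubelet unhealthy. A minimal sketch of that wait loop (the URL and the 4m timeout come from the log; the `wait_healthy` function name, the probe command, and the 2-second interval are assumptions, not kubeadm's actual implementation):

```shell
# Poll a health-probe command until it succeeds or a deadline (seconds) passes.
# wait_healthy and the 2s interval are hypothetical; kubeadm's real loop differs.
wait_healthy() {
  local probe=$1 deadline=$2
  local start
  start=$(date +%s)
  until eval "$probe" >/dev/null 2>&1; do
    if [ $(( $(date +%s) - start )) -ge "$deadline" ]; then
      return 1  # mirrors "[kubelet-check] The kubelet is not healthy after 4m0s"
    fi
    sleep 2
  done
  return 0
}

# Usage against the endpoint from the log (240s = 4m0s):
# wait_healthy 'curl -sSf http://127.0.0.1:10248/healthz' 240
```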
	W1212 21:30:52.781976    3280 out.go:285] * 
	W1212 21:30:52.781976    3280 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001226136s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 21:30:52.783438    3280 out.go:285] * 
	W1212 21:30:52.785599    3280 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 21:30:52.791153    3280 out.go:203] 
	W1212 21:30:52.795058    3280 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001226136s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 21:30:52.795120    3280 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 21:30:52.795120    3280 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 21:30:52.797749    3280 out.go:203] 
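	The suggestion printed above is to pass `--extra-config=kubelet.cgroup-driver=systemd` to `minikube start`. A hedged sketch of acting on that advice (the flag is quoted from the log; the cgroup-version probe and echoing the command for review instead of executing it are assumptions, since actually starting a cluster needs a working Docker/WSL2 host):

```shell
# The warnings above concern cgroup v1 vs v2; on Linux the filesystem type
# mounted at /sys/fs/cgroup reveals which is active (cgroup2fs => v2).
if [ "$(stat -fc %T /sys/fs/cgroup 2>/dev/null)" = "cgroup2fs" ]; then
  echo "cgroup v2"
else
  echo "cgroup v1 (or not Linux)"
fi

# Compose the suggested start command; echo it for review rather than running it.
driver_flag="--extra-config=kubelet.cgroup-driver=systemd"
start_cmd="minikube start ${driver_flag}"
echo "${start_cmd}"
```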
	I1212 21:30:54.559164   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:30:54.668798   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:54.668798   13804 retry.go:31] will retry after 9.183824945s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:30:55.224252   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:30:59.581067   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:30:59.656772   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:30:59.656772   13804 retry.go:31] will retry after 12.343146112s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:31:00.152832   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:31:00.258393   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:31:00.258393   13804 retry.go:31] will retry after 14.175931828s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
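	The retry.go lines above show minikube re-running each failed `kubectl apply` after a growing delay (9.18s, 12.34s, 14.18s, ...). A minimal sketch of that retry-with-backoff pattern (the `retry` function name, the doubling schedule, and the attempt cap are assumptions, not minikube's actual algorithm):

```shell
# Retry a command up to $1 times, sleeping an increasing delay between failures.
retry() {
  local attempts=$1; shift
  local delay=1 i
  for i in $(seq 1 "$attempts"); do
    if "$@"; then
      return 0
    fi
    sleep "$delay"
    delay=$((delay * 2))  # grow the wait, as the "will retry after" lines do
  done
  return 1
}

# e.g. retry 4 kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
```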
	I1212 21:31:03.857903   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:31:03.940387   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:31:03.940495   13804 retry.go:31] will retry after 12.961917726s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:31:05.261194   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:31:12.006287   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:31:12.116839   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:31:12.116839   13804 retry.go:31] will retry after 16.436096416s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:31:14.440602   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:31:14.524069   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:31:14.524069   13804 retry.go:31] will retry after 28.643403029s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:31:15.302381   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:31:16.907534   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:31:16.996503   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:31:16.996503   13804 retry.go:31] will retry after 47.424716965s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:31:25.338083   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:31:28.558807   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:31:28.646531   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:31:28.646531   13804 retry.go:31] will retry after 25.840068373s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:31:35.377503   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:31:43.173479   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:31:43.254885   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:31:43.254962   13804 retry.go:31] will retry after 19.184111843s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:31:45.418241   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:31:54.493389   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:31:54.576619   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:31:54.577177   13804 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1212 21:31:55.455424   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:32:02.444841   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:02.530332   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:32:02.530332   13804 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 21:32:04.426968   13804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:04.514916   13804 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:32:04.514916   13804 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 21:32:04.517987   13804 out.go:179] * Enabled addons: 
	I1212 21:32:04.521222   13804 addons.go:530] duration metric: took 1m39.8254383s for enable addons: enabled=[]
	W1212 21:32:05.496160   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:32:15.537579   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	
	
	==> Docker <==
	Dec 12 21:22:44 newest-cni-449900 dockerd[1190]: time="2025-12-12T21:22:44.897992617Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 12 21:22:44 newest-cni-449900 dockerd[1190]: time="2025-12-12T21:22:44.898182835Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 12 21:22:44 newest-cni-449900 dockerd[1190]: time="2025-12-12T21:22:44.898196437Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 12 21:22:44 newest-cni-449900 dockerd[1190]: time="2025-12-12T21:22:44.898201637Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 21:22:44 newest-cni-449900 dockerd[1190]: time="2025-12-12T21:22:44.898208938Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 12 21:22:44 newest-cni-449900 dockerd[1190]: time="2025-12-12T21:22:44.898237241Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 12 21:22:44 newest-cni-449900 dockerd[1190]: time="2025-12-12T21:22:44.898288445Z" level=info msg="Initializing buildkit"
	Dec 12 21:22:45 newest-cni-449900 dockerd[1190]: time="2025-12-12T21:22:45.027186712Z" level=info msg="Completed buildkit initialization"
	Dec 12 21:22:45 newest-cni-449900 dockerd[1190]: time="2025-12-12T21:22:45.035180467Z" level=info msg="Daemon has completed initialization"
	Dec 12 21:22:45 newest-cni-449900 dockerd[1190]: time="2025-12-12T21:22:45.035400987Z" level=info msg="API listen on [::]:2376"
	Dec 12 21:22:45 newest-cni-449900 dockerd[1190]: time="2025-12-12T21:22:45.035429690Z" level=info msg="API listen on /run/docker.sock"
	Dec 12 21:22:45 newest-cni-449900 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 12 21:22:45 newest-cni-449900 dockerd[1190]: time="2025-12-12T21:22:45.035467194Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 21:22:45 newest-cni-449900 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 21:22:45 newest-cni-449900 cri-dockerd[1484]: time="2025-12-12T21:22:45Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 12 21:22:45 newest-cni-449900 cri-dockerd[1484]: time="2025-12-12T21:22:45Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 12 21:22:45 newest-cni-449900 cri-dockerd[1484]: time="2025-12-12T21:22:45Z" level=info msg="Start docker client with request timeout 0s"
	Dec 12 21:22:45 newest-cni-449900 cri-dockerd[1484]: time="2025-12-12T21:22:45Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 12 21:22:45 newest-cni-449900 cri-dockerd[1484]: time="2025-12-12T21:22:45Z" level=info msg="Loaded network plugin cni"
	Dec 12 21:22:45 newest-cni-449900 cri-dockerd[1484]: time="2025-12-12T21:22:45Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 12 21:22:45 newest-cni-449900 cri-dockerd[1484]: time="2025-12-12T21:22:45Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 12 21:22:45 newest-cni-449900 cri-dockerd[1484]: time="2025-12-12T21:22:45Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 12 21:22:45 newest-cni-449900 cri-dockerd[1484]: time="2025-12-12T21:22:45Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 12 21:22:45 newest-cni-449900 cri-dockerd[1484]: time="2025-12-12T21:22:45Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 12 21:22:45 newest-cni-449900 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:32:26.460773   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:32:26.462228   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:32:26.463651   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:32:26.464950   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:32:26.465831   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +5.497307] CPU: 8 PID: 454817 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f913e6c4b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7f913e6c4af6.
	[  +0.000001] RSP: 002b:00007ffd0c4e19c0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000034] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	[  +0.811337] CPU: 0 PID: 454944 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7fe620208b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7fe620208af6.
	[  +0.000001] RSP: 002b:00007ffc944a0d80 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 21:32:26 up  2:34,  0 user,  load average: 0.65, 1.43, 2.92
	Linux newest-cni-449900 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 21:32:23 newest-cni-449900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:32:23 newest-cni-449900 kubelet[12092]: E1212 21:32:23.631270   12092 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:32:23 newest-cni-449900 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:32:23 newest-cni-449900 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:32:24 newest-cni-449900 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 442.
	Dec 12 21:32:24 newest-cni-449900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:32:24 newest-cni-449900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:32:24 newest-cni-449900 kubelet[12103]: E1212 21:32:24.346519   12103 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:32:24 newest-cni-449900 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:32:24 newest-cni-449900 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:32:25 newest-cni-449900 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 443.
	Dec 12 21:32:25 newest-cni-449900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:32:25 newest-cni-449900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:32:25 newest-cni-449900 kubelet[12125]: E1212 21:32:25.112973   12125 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:32:25 newest-cni-449900 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:32:25 newest-cni-449900 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:32:25 newest-cni-449900 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 444.
	Dec 12 21:32:25 newest-cni-449900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:32:25 newest-cni-449900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:32:25 newest-cni-449900 kubelet[12153]: E1212 21:32:25.899576   12153 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:32:25 newest-cni-449900 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:32:25 newest-cni-449900 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:32:26 newest-cni-449900 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 445.
	Dec 12 21:32:26 newest-cni-449900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:32:26 newest-cni-449900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-449900 -n newest-cni-449900
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-449900 -n newest-cni-449900: exit status 6 (584.9146ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 21:32:27.517937   10924 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-449900" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-449900" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (91.77s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (380.95s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-449900 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0
E1212 21:32:31.927813   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:32:34.498959   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-124600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:32:44.601799   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:32:47.872419   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:33:00.163614   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:33:01.179563   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:33:02.205779   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-124600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:33:05.627124   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:34:23.237468   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:34:28.701598   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:34:54.847442   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:35:13.439645   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:35:18.040256   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:35:22.830358   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:35:31.911494   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:35:33.653560   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-246400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:35:38.454768   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:36:17.934760   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p newest-cni-449900 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 105 (6m14.2355035s)

                                                
                                                
-- stdout --
	* [newest-cni-449900] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting "newest-cni-449900" primary control-plane node in "newest-cni-449900" cluster
	* Pulling base image v0.0.48-1765505794-22112 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 21:32:30.067892    4248 out.go:360] Setting OutFile to fd 948 ...
	I1212 21:32:30.114132    4248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:32:30.114655    4248 out.go:374] Setting ErrFile to fd 1420...
	I1212 21:32:30.114655    4248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:32:30.127088    4248 out.go:368] Setting JSON to false
	I1212 21:32:30.129182    4248 start.go:133] hostinfo: {"hostname":"minikube4","uptime":9288,"bootTime":1765565862,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 21:32:30.129182    4248 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 21:32:30.137179    4248 out.go:179] * [newest-cni-449900] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 21:32:30.141601    4248 notify.go:221] Checking for updates...
	I1212 21:32:30.144580    4248 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:32:30.147458    4248 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:32:30.149957    4248 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 21:32:30.152754    4248 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 21:32:30.158286    4248 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:32:30.162328    4248 config.go:182] Loaded profile config "newest-cni-449900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:32:30.162996    4248 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 21:32:30.280643    4248 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 21:32:30.284638    4248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:32:30.515373    4248 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 21:32:30.495157348 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:32:30.521131    4248 out.go:179] * Using the docker driver based on existing profile
	I1212 21:32:30.522918    4248 start.go:309] selected driver: docker
	I1212 21:32:30.523440    4248 start.go:927] validating driver "docker" against &{Name:newest-cni-449900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9
PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:32:30.523530    4248 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:32:30.607623    4248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:32:30.833176    4248 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 21:32:30.815233925 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:32:30.833176    4248 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 21:32:30.833176    4248 cni.go:84] Creating CNI manager for ""
	I1212 21:32:30.833176    4248 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:32:30.834178    4248 start.go:353] cluster config:
	{Name:newest-cni-449900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:32:30.837195    4248 out.go:179] * Starting "newest-cni-449900" primary control-plane node in "newest-cni-449900" cluster
	I1212 21:32:30.839178    4248 cache.go:134] Beginning downloading kic base image for docker with docker
	I1212 21:32:30.842177    4248 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:32:30.845192    4248 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:32:30.845192    4248 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 21:32:30.846208    4248 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1212 21:32:30.846208    4248 cache.go:65] Caching tarball of preloaded images
	I1212 21:32:30.846208    4248 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 21:32:30.846208    4248 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1212 21:32:30.846208    4248 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\config.json ...
	I1212 21:32:30.923261    4248 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:32:30.923313    4248 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:32:30.923343    4248 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:32:30.923343    4248 start.go:360] acquireMachinesLock for newest-cni-449900: {Name:mk48eea84400331f4298a4f493f82f3b8d104477 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:32:30.923343    4248 start.go:364] duration metric: took 0s to acquireMachinesLock for "newest-cni-449900"
	I1212 21:32:30.923343    4248 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:32:30.923343    4248 fix.go:54] fixHost starting: 
	I1212 21:32:30.931113    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:30.993597    4248 fix.go:112] recreateIfNeeded on newest-cni-449900: state=Stopped err=<nil>
	W1212 21:32:30.993597    4248 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:32:30.996597    4248 out.go:252] * Restarting existing docker container for "newest-cni-449900" ...
	I1212 21:32:31.000598    4248 cli_runner.go:164] Run: docker start newest-cni-449900
	I1212 21:32:31.538801    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:31.592240    4248 kic.go:430] container "newest-cni-449900" state is running.
	I1212 21:32:31.597242    4248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-449900
	I1212 21:32:31.648243    4248 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\config.json ...
	I1212 21:32:31.650253    4248 machine.go:94] provisionDockerMachine start ...
	I1212 21:32:31.653233    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:31.705233    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:31.706241    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:31.706241    4248 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:32:31.708237    4248 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 21:32:34.889437    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-449900
	
	I1212 21:32:34.889437    4248 ubuntu.go:182] provisioning hostname "newest-cni-449900"
	I1212 21:32:34.893345    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:34.953803    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:34.953803    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:34.953803    4248 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-449900 && echo "newest-cni-449900" | sudo tee /etc/hostname
	I1212 21:32:35.153212    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-449900
	
	I1212 21:32:35.157498    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:35.214117    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:35.214117    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:35.214117    4248 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-449900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-449900/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-449900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:32:35.408770    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:32:35.408770    4248 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1212 21:32:35.408770    4248 ubuntu.go:190] setting up certificates
	I1212 21:32:35.408770    4248 provision.go:84] configureAuth start
	I1212 21:32:35.413085    4248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-449900
	I1212 21:32:35.465735    4248 provision.go:143] copyHostCerts
	I1212 21:32:35.466783    4248 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1212 21:32:35.466783    4248 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1212 21:32:35.466783    4248 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 21:32:35.468113    4248 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1212 21:32:35.468113    4248 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1212 21:32:35.468113    4248 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1212 21:32:35.469268    4248 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1212 21:32:35.469268    4248 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1212 21:32:35.469268    4248 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 21:32:35.469826    4248 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-449900 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-449900]
	I1212 21:32:35.674609    4248 provision.go:177] copyRemoteCerts
	I1212 21:32:35.678598    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:32:35.681582    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:35.739388    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:35.872528    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 21:32:35.899869    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 21:32:35.927844    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 21:32:35.957449    4248 provision.go:87] duration metric: took 548.67ms to configureAuth
	I1212 21:32:35.957449    4248 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:32:35.957975    4248 config.go:182] Loaded profile config "newest-cni-449900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:32:35.961590    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:36.020998    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:36.021502    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:36.021530    4248 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 21:32:36.199792    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1212 21:32:36.199792    4248 ubuntu.go:71] root file system type: overlay
	I1212 21:32:36.199792    4248 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 21:32:36.203186    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:36.259998    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:36.260186    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:36.260186    4248 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 21:32:36.441991    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 21:32:36.446390    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:36.501449    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:36.501449    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:36.501449    4248 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 21:32:36.684877    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:32:36.684877    4248 machine.go:97] duration metric: took 5.0345422s to provisionDockerMachine
	I1212 21:32:36.684877    4248 start.go:293] postStartSetup for "newest-cni-449900" (driver="docker")
	I1212 21:32:36.684877    4248 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:32:36.689141    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:32:36.692539    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:36.754930    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:36.889809    4248 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:32:36.897890    4248 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:32:36.897963    4248 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:32:36.897963    4248 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1212 21:32:36.898205    4248 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1212 21:32:36.898730    4248 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> 133962.pem in /etc/ssl/certs
	I1212 21:32:36.903242    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:32:36.916977    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /etc/ssl/certs/133962.pem (1708 bytes)
	I1212 21:32:36.944800    4248 start.go:296] duration metric: took 259.9189ms for postStartSetup
	I1212 21:32:36.949417    4248 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:32:36.952948    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:37.004507    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:37.144023    4248 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:32:37.153150    4248 fix.go:56] duration metric: took 6.229706s for fixHost
	I1212 21:32:37.153150    4248 start.go:83] releasing machines lock for "newest-cni-449900", held for 6.229706s
	I1212 21:32:37.156410    4248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-449900
	I1212 21:32:37.223471    4248 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1212 21:32:37.227811    4248 ssh_runner.go:195] Run: cat /version.json
	I1212 21:32:37.227811    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:37.230777    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:37.282580    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:37.288636    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	W1212 21:32:37.396194    4248 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1212 21:32:37.419380    4248 ssh_runner.go:195] Run: systemctl --version
	I1212 21:32:37.433080    4248 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:32:37.443489    4248 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:32:37.447019    4248 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:32:37.460679    4248 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:32:37.460679    4248 start.go:496] detecting cgroup driver to use...
	I1212 21:32:37.460679    4248 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:32:37.460679    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:32:37.488659    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1212 21:32:37.507342    4248 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1212 21:32:37.507342    4248 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1212 21:32:37.507342    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 21:32:37.523982    4248 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 21:32:37.528069    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 21:32:37.548632    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:32:37.567777    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 21:32:37.588723    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:32:37.608062    4248 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:32:37.629201    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 21:32:37.649560    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 21:32:37.667527    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 21:32:37.690529    4248 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:32:37.710687    4248 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:32:37.729958    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:37.888501    4248 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 21:32:38.016543    4248 start.go:496] detecting cgroup driver to use...
	I1212 21:32:38.016543    4248 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:32:38.021759    4248 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 21:32:38.045532    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:32:38.071354    4248 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:32:38.150544    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:32:38.173582    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 21:32:38.193077    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:32:38.218979    4248 ssh_runner.go:195] Run: which cri-dockerd
	I1212 21:32:38.231122    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 21:32:38.246140    4248 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1212 21:32:38.271127    4248 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 21:32:38.414548    4248 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 21:32:38.549270    4248 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 21:32:38.549270    4248 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 21:32:38.574682    4248 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1212 21:32:38.596980    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:38.746077    4248 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 21:32:39.571827    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:32:39.594550    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1212 21:32:39.618807    4248 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1212 21:32:39.645739    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:32:39.668299    4248 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 21:32:39.807124    4248 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 21:32:39.946119    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:40.086356    4248 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 21:32:40.115320    4248 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1212 21:32:40.136980    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:40.275781    4248 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1212 21:32:40.384906    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:32:40.403362    4248 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 21:32:40.407665    4248 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 21:32:40.417070    4248 start.go:564] Will wait 60s for crictl version
	I1212 21:32:40.421802    4248 ssh_runner.go:195] Run: which crictl
	I1212 21:32:40.435075    4248 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:32:40.477353    4248 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1212 21:32:40.481633    4248 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:32:40.526598    4248 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:32:40.566170    4248 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1212 21:32:40.570307    4248 cli_runner.go:164] Run: docker exec -t newest-cni-449900 dig +short host.docker.internal
	I1212 21:32:40.704237    4248 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1212 21:32:40.709308    4248 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1212 21:32:40.719244    4248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:32:40.739290    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:40.797774    4248 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 21:32:40.799861    4248 kubeadm.go:884] updating cluster {Name:newest-cni-449900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 21:32:40.800240    4248 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 21:32:40.804298    4248 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:32:40.837317    4248 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 21:32:40.837317    4248 docker.go:621] Images already preloaded, skipping extraction
	I1212 21:32:40.841250    4248 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:32:40.875188    4248 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 21:32:40.875188    4248 cache_images.go:86] Images are preloaded, skipping loading
	I1212 21:32:40.875188    4248 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 docker true true} ...
	I1212 21:32:40.875753    4248 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-449900 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
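[Editor's note: the kubelet unit logged above uses the standard systemd drop-in override pattern; a minimal illustrative fragment (paths and flags abbreviated from the log, not an exact copy of minikube's generated file):]

```ini
; /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (path as scp'd later in this log)
[Service]
; An empty ExecStart= clears the ExecStart inherited from the base unit;
; without it systemd rejects a second ExecStart for a non-oneshot service.
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
```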
	I1212 21:32:40.879753    4248 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 21:32:40.954718    4248 cni.go:84] Creating CNI manager for ""
	I1212 21:32:40.954718    4248 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:32:40.954718    4248 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1212 21:32:40.954718    4248 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-449900 NodeName:newest-cni-449900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:32:40.954718    4248 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-449900"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
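[Editor's note: the kubeadm config above pins podSubnet 10.42.0.0/16 and serviceSubnet 10.96.0.0/12, which must not overlap for the cluster to come up cleanly. A quick stdlib sanity check of those values (illustrative only, not part of minikube):]

```python
import ipaddress

# Subnets and node IP taken from the kubeadm config dump in the log above.
pod_subnet = ipaddress.ip_network("10.42.0.0/16")
service_subnet = ipaddress.ip_network("10.96.0.0/12")
node_ip = ipaddress.ip_address("192.168.85.2")

# Pod and service CIDRs would conflict if they overlapped; here 10.42.0.0/16
# sits entirely outside 10.96.0.0-10.111.255.255.
assert not pod_subnet.overlaps(service_subnet)

# The node's address (on the Docker network) must also fall outside both ranges.
assert node_ip not in pod_subnet and node_ip not in service_subnet
```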
	I1212 21:32:40.959727    4248 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 21:32:40.972418    4248 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:32:40.977111    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:32:40.988704    4248 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1212 21:32:41.011653    4248 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 21:32:41.032100    4248 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1212 21:32:41.059217    4248 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1212 21:32:41.066089    4248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
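[Editor's note: the bash one-liner above updates /etc/hosts idempotently: strip any existing line for the host, then append a fresh "IP<TAB>hostname" mapping. A sketch of the same logic (the `update_hosts` helper is hypothetical, for illustration):]

```python
HOST = "control-plane.minikube.internal"

def update_hosts(contents: str, ip: str, host: str = HOST) -> str:
    # Drop any line already ending in "<TAB>host" (the grep -v step),
    # then append the new mapping (the echo step).
    kept = [line for line in contents.splitlines()
            if not line.endswith("\t" + host)]
    kept.append(f"{ip}\t{host}")
    return "\n".join(kept) + "\n"

# Applying it twice yields the same file, so cluster restarts never
# accumulate duplicate entries.
once = update_hosts("127.0.0.1\tlocalhost\n", "192.168.85.2")
assert update_hosts(once, "192.168.85.2") == once
```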
	I1212 21:32:41.085707    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:41.226868    4248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:32:41.248749    4248 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900 for IP: 192.168.85.2
	I1212 21:32:41.248749    4248 certs.go:195] generating shared ca certs ...
	I1212 21:32:41.248749    4248 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:32:41.249389    4248 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1212 21:32:41.249651    4248 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1212 21:32:41.249726    4248 certs.go:257] generating profile certs ...
	I1212 21:32:41.250263    4248 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\client.key
	I1212 21:32:41.250513    4248 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.key.67e5e88d
	I1212 21:32:41.250722    4248 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\proxy-client.key
	I1212 21:32:41.251524    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem (1338 bytes)
	W1212 21:32:41.251741    4248 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396_empty.pem, impossibly tiny 0 bytes
	I1212 21:32:41.251826    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1212 21:32:41.252063    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 21:32:41.252220    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 21:32:41.252398    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1212 21:32:41.252710    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem (1708 bytes)
	I1212 21:32:41.254134    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:32:41.288378    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 21:32:41.319186    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:32:41.346629    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:32:41.379412    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 21:32:41.407785    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 21:32:41.434595    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:32:41.465218    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:32:41.491104    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem --> /usr/share/ca-certificates/13396.pem (1338 bytes)
	I1212 21:32:41.526955    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /usr/share/ca-certificates/133962.pem (1708 bytes)
	I1212 21:32:41.554086    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:32:41.585032    4248 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:32:41.609643    4248 ssh_runner.go:195] Run: openssl version
	I1212 21:32:41.624413    4248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/133962.pem
	I1212 21:32:41.643262    4248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/133962.pem /etc/ssl/certs/133962.pem
	I1212 21:32:41.661513    4248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133962.pem
	I1212 21:32:41.670034    4248 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 21:32:41.674026    4248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133962.pem
	I1212 21:32:41.722721    4248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:32:41.739428    4248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:32:41.758028    4248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:32:41.775766    4248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:32:41.783842    4248 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:32:41.788493    4248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:32:41.834978    4248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:32:41.852716    4248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13396.pem
	I1212 21:32:41.872502    4248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13396.pem /etc/ssl/certs/13396.pem
	I1212 21:32:41.892268    4248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13396.pem
	I1212 21:32:41.902126    4248 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 21:32:41.906908    4248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13396.pem
	I1212 21:32:41.957407    4248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:32:41.975429    4248 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:32:41.988570    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:32:42.036143    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:32:42.085875    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:32:42.134210    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:32:42.182920    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:32:42.231061    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
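[Editor's note: each `openssl x509 -checkend 86400` run above exits 0 only if the certificate will still be valid 86400 seconds (24 h) from now. The equivalent logic, sketched in Python with a hypothetical `checkend` helper (openssl does the real parsing and check here):]

```python
from datetime import datetime, timedelta, timezone

def checkend(not_after: datetime, seconds: int = 86400) -> bool:
    """Mimic `openssl x509 -checkend <seconds>`: True means the cert
    is still valid <seconds> from now (openssl exit status 0)."""
    return datetime.now(timezone.utc) + timedelta(seconds=seconds) < not_after

# A cert expiring in two days passes the 24-hour check;
# one expiring in an hour does not.
soon = datetime.now(timezone.utc) + timedelta(hours=1)
later = datetime.now(timezone.utc) + timedelta(days=2)
assert checkend(later) and not checkend(soon)
```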
	I1212 21:32:42.275151    4248 kubeadm.go:401] StartCluster: {Name:newest-cni-449900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:32:42.279857    4248 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 21:32:42.316959    4248 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:32:42.330116    4248 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 21:32:42.330159    4248 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 21:32:42.334136    4248 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:32:42.349113    4248 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:32:42.353059    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.410308    4248 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-449900" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:32:42.411003    4248 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-449900" cluster setting kubeconfig missing "newest-cni-449900" context setting]
	I1212 21:32:42.411047    4248 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:32:42.433468    4248 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:32:42.450763    4248 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1212 21:32:42.450851    4248 kubeadm.go:602] duration metric: took 120.6907ms to restartPrimaryControlPlane
	I1212 21:32:42.450851    4248 kubeadm.go:403] duration metric: took 175.6977ms to StartCluster
	I1212 21:32:42.450887    4248 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:32:42.451069    4248 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:32:42.452318    4248 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:32:42.452708    4248 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 21:32:42.452708    4248 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 21:32:42.453236    4248 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-449900"
	I1212 21:32:42.453345    4248 addons.go:70] Setting dashboard=true in profile "newest-cni-449900"
	I1212 21:32:42.453345    4248 addons.go:239] Setting addon dashboard=true in "newest-cni-449900"
	W1212 21:32:42.453442    4248 addons.go:248] addon dashboard should already be in state true
	I1212 21:32:42.453345    4248 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-449900"
	I1212 21:32:42.453442    4248 config.go:182] Loaded profile config "newest-cni-449900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:32:42.453345    4248 addons.go:70] Setting default-storageclass=true in profile "newest-cni-449900"
	I1212 21:32:42.453442    4248 host.go:66] Checking if "newest-cni-449900" exists ...
	I1212 21:32:42.453548    4248 host.go:66] Checking if "newest-cni-449900" exists ...
	I1212 21:32:42.453548    4248 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-449900"
	I1212 21:32:42.456924    4248 out.go:179] * Verifying Kubernetes components...
	I1212 21:32:42.462734    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:42.462734    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:42.462734    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:42.464017    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:42.521801    4248 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1212 21:32:42.522505    4248 addons.go:239] Setting addon default-storageclass=true in "newest-cni-449900"
	I1212 21:32:42.522505    4248 host.go:66] Checking if "newest-cni-449900" exists ...
	I1212 21:32:42.524049    4248 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:32:42.525755    4248 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1212 21:32:42.527435    4248 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:42.527435    4248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:32:42.529307    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 21:32:42.529307    4248 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 21:32:42.532224    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.534064    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.535884    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:42.589450    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:42.589450    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:42.590457    4248 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:32:42.590457    4248 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:32:42.593457    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.639453    4248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:32:42.655360    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:42.731687    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 21:32:42.731721    4248 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 21:32:42.735286    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:42.750859    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 21:32:42.750859    4248 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 21:32:42.774055    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.777032    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 21:32:42.777032    4248 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 21:32:42.833790    4248 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:32:42.834403    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 21:32:42.834403    4248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 21:32:42.837834    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:32:42.838786    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:42.862404    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 21:32:42.862439    4248 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1212 21:32:42.938528    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:42.938593    4248 retry.go:31] will retry after 285.852869ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:42.946574    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 21:32:42.946574    4248 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 21:32:42.970519    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 21:32:42.970570    4248 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 21:32:43.044060    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 21:32:43.044113    4248 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W1212 21:32:43.058105    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.058105    4248 retry.go:31] will retry after 367.133117ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
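[Editor's note: the "apply failed, will retry after 285.852869ms / 367.133117ms" lines above show minikube's retry.go absorbing transient apiserver connection refusals with short, jittered, growing delays instead of failing the addon enable step outright. A minimal sketch of that pattern (the `retry` helper and delay formula are illustrative, not minikube's exact implementation):]

```python
import random
import time

def retry(op, attempts=5, base_delay=0.25):
    """Call op(), retrying on any exception with a growing, jittered delay."""
    for attempt in range(attempts):
        try:
            return op()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            # Grow the delay per attempt and add jitter, which is why the
            # logged waits (285ms, then 367ms) are irregular.
            time.sleep(base_delay * (attempt + 1) * random.uniform(1.0, 1.5))

calls = {"n": 0}
def flaky_apply():
    calls["n"] += 1
    if calls["n"] < 3:  # apiserver still coming up on the first two tries
        raise ConnectionRefusedError("dial tcp [::1]:8443: connection refused")
    return "applied"

assert retry(flaky_apply, base_delay=0.01) == "applied"
assert calls["n"] == 3
```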
	I1212 21:32:43.065874    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 21:32:43.065934    4248 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 21:32:43.090839    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:43.170845    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.170845    4248 retry.go:31] will retry after 360.542613ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.229247    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:43.312297    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.312297    4248 retry.go:31] will retry after 217.305042ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.338331    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:43.430298    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:43.514114    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.514114    4248 retry.go:31] will retry after 194.385848ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.535363    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:43.536351    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:43.629787    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.629787    4248 retry.go:31] will retry after 687.212662ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:32:43.630799    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.630799    4248 retry.go:31] will retry after 521.23237ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.714316    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:43.819832    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.819832    4248 retry.go:31] will retry after 810.162007ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.838681    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:44.158533    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:44.239462    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.239462    4248 retry.go:31] will retry after 722.851273ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.322823    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:44.338752    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:32:44.412482    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.412540    4248 retry.go:31] will retry after 687.458608ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.636450    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:44.715327    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.715327    4248 retry.go:31] will retry after 881.564419ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.839869    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:44.968341    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:45.056052    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.056052    4248 retry.go:31] will retry after 1.083270835s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.107611    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:45.188500    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.188500    4248 retry.go:31] will retry after 1.051455266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.338492    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:45.601770    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:45.692383    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.692901    4248 retry.go:31] will retry after 847.636525ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.840006    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:46.145354    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:46.237762    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.237762    4248 retry.go:31] will retry after 1.841338358s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.245135    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:46.317745    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.317805    4248 retry.go:31] will retry after 1.659422291s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.338989    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:46.545951    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:46.622899    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.622899    4248 retry.go:31] will retry after 2.146117093s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.840053    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:47.339185    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:47.839795    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:47.982731    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:48.067392    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.067392    4248 retry.go:31] will retry after 2.380093596s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.084148    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:48.170522    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.171061    4248 retry.go:31] will retry after 1.169420442s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.339141    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:48.775985    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:32:48.839214    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:32:48.853292    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.853292    4248 retry.go:31] will retry after 1.773821104s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:49.339743    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:49.345080    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:49.426473    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:49.426473    4248 retry.go:31] will retry after 2.584062662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:49.838663    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:50.339208    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:50.452188    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:50.553300    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:50.553376    4248 retry.go:31] will retry after 3.748834475s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:50.633099    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:50.715582    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:50.715582    4248 retry.go:31] will retry after 2.533349108s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:50.839720    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:51.339390    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:51.839102    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:52.016916    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:52.095547    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:52.095725    4248 retry.go:31] will retry after 3.902877695s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:52.339080    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:52.839789    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:53.254021    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:53.339386    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:53.339438    4248 retry.go:31] will retry after 8.052230376s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:53.341078    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:53.838612    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:54.306967    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:54.338622    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:32:54.396253    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:54.396253    4248 retry.go:31] will retry after 4.728755572s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:54.840384    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:55.339060    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:55.838985    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:56.003958    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:56.086306    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:56.086306    4248 retry.go:31] will retry after 4.27813674s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:56.339722    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:56.838152    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:57.339158    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:57.840925    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:58.339529    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:58.839108    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:59.131643    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:59.224557    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:59.224611    4248 retry.go:31] will retry after 6.97751667s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:59.340004    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:59.839304    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:00.339076    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:00.368989    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:33:00.453078    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:00.453078    4248 retry.go:31] will retry after 11.55737722s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:00.839680    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:01.337369    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:01.396666    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:33:01.481506    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:01.481506    4248 retry.go:31] will retry after 9.469717232s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:01.840205    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:02.340807    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:02.839775    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:03.338747    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:03.839967    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:04.338805    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:04.839654    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:05.340177    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:05.839010    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:06.207461    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:33:06.310159    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:06.310159    4248 retry.go:31] will retry after 14.485985358s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:06.338617    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:06.839760    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:07.339574    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:07.839761    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:08.339168    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:08.839559    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:09.340617    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:09.841017    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:10.339655    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:10.840253    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:10.956764    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:33:11.041153    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:11.041153    4248 retry.go:31] will retry after 7.720343102s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:11.339467    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:11.838030    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:12.015476    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:33:12.097506    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:12.098032    4248 retry.go:31] will retry after 13.738254929s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:12.340239    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:12.840739    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:13.338865    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:13.841423    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:14.338850    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:14.839426    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:15.340112    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:15.839885    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:16.340084    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:16.839133    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:17.340118    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:17.838995    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:18.340239    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:18.768160    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:33:18.839718    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:18.858996    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:18.859079    4248 retry.go:31] will retry after 29.319727103s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:19.339526    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:19.839033    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:20.342579    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:20.799893    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:33:20.838333    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:20.898204    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:20.898204    4248 retry.go:31] will retry after 24.787260988s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:21.340136    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:21.839432    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:22.339934    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:22.838948    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:23.339680    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:23.839713    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:24.339736    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:24.841279    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:25.340290    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:25.840116    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:25.841834    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:33:25.931872    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:25.931872    4248 retry.go:31] will retry after 19.805631473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:26.340454    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:26.840685    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:27.340143    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:27.839689    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:28.341476    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:28.840343    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:29.340212    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:29.839319    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:30.338669    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:30.838810    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:31.338837    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:31.840375    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:32.339813    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:32.839324    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:33.339926    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:33.839761    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:34.341257    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:34.840212    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:35.339489    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:35.839856    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:36.337998    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:36.840979    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:37.339343    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:37.839384    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:38.340134    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:38.840040    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:39.339613    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:39.841297    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:40.339971    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:40.839521    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:41.340107    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:41.841992    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:42.341019    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:42.840372    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:42.879262    4248 logs.go:282] 0 containers: []
	W1212 21:33:42.879262    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:42.883398    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:42.914084    4248 logs.go:282] 0 containers: []
	W1212 21:33:42.914084    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:42.917664    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:42.949277    4248 logs.go:282] 0 containers: []
	W1212 21:33:42.949344    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:42.952925    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:42.982130    4248 logs.go:282] 0 containers: []
	W1212 21:33:42.982130    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:42.985453    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:43.018015    4248 logs.go:282] 0 containers: []
	W1212 21:33:43.018015    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:43.021515    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:43.051693    4248 logs.go:282] 0 containers: []
	W1212 21:33:43.051693    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:43.055531    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:43.086156    4248 logs.go:282] 0 containers: []
	W1212 21:33:43.086156    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:43.089864    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:43.124751    4248 logs.go:282] 0 containers: []
	W1212 21:33:43.124784    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:43.124813    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:43.124844    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:43.199819    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:43.199905    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:43.266773    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:43.266773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:43.306805    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:43.306805    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:43.398810    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:43.387138    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.388868    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.391036    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.392819    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.394391    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:43.387138    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.388868    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.391036    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.392819    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.394391    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:43.398906    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:43.398934    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:45.691215    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:33:45.743028    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:33:45.776796    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:45.776826    4248 retry.go:31] will retry after 41.02577954s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:33:45.824108    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:45.824108    4248 retry.go:31] will retry after 22.15645807s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:45.933456    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:45.956650    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:45.987658    4248 logs.go:282] 0 containers: []
	W1212 21:33:45.987702    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:45.991775    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:46.021440    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.021486    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:46.025175    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:46.052749    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.052749    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:46.057405    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:46.089535    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.089535    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:46.093406    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:46.133904    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.133904    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:46.137545    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:46.167364    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.167364    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:46.172118    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:46.199303    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.199335    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:46.203217    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:46.233482    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.233482    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:46.233482    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:46.233482    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:46.309757    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:46.300789    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.302135    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.303841    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.305512    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.306441    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:46.300789    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.302135    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.303841    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.305512    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.306441    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:46.309757    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:46.309757    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:46.342247    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:46.342247    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:46.392802    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:46.392802    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:46.456332    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:46.456332    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:48.184906    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:33:48.272776    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:48.272776    4248 retry.go:31] will retry after 31.924309129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
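Every apply failure above is retried because the stderr indicates a transient condition (the apiserver refusing connections during OpenAPI download for validation) rather than a bad manifest. A minimal sketch of that retryable/fatal classification — the heuristic and the `isRetryableApply` name are assumptions for illustration, not minikube's actual addons.go logic:

```go
package main

import (
	"fmt"
	"strings"
)

// isRetryableApply reports whether a kubectl apply failure looks transient,
// i.e. caused by the apiserver being unreachable (as throughout this log),
// rather than by an invalid manifest. Illustrative heuristic only.
func isRetryableApply(stderr string) bool {
	return strings.Contains(stderr, "connect: connection refused") ||
		strings.Contains(stderr, "failed to download openapi")
}

func main() {
	transient := `error validating data: failed to download openapi: ` +
		`Get "https://localhost:8443/openapi/v2?timeout=32s": ` +
		`dial tcp [::1]:8443: connect: connection refused`
	fmt.Println(isRetryableApply(transient))                      // prints "true"
	fmt.Println(isRetryableApply(`error: unknown field "specc"`)) // prints "false"
}
```

Note the errors are validation-phase failures: kubectl suggests `--validate=false` as a workaround, but that would only skip schema validation — the subsequent apply would still fail against a down apiserver, so retrying is the correct response here.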
	I1212 21:33:49.001621    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:49.031284    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:49.062413    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.062413    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:49.067359    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:49.099157    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.099157    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:49.103086    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:49.145777    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.145841    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:49.148934    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:49.177432    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.177432    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:49.180892    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:49.208677    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.208753    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:49.213124    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:49.242190    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.242273    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:49.246212    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:49.271793    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.271793    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:49.275860    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:49.301204    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.301304    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:49.301304    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:49.301304    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:49.387773    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:49.378267    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.379554    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.380906    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.382182    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.383498    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:49.378267    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.379554    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.380906    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.382182    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.383498    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:49.387773    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:49.387773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:49.417403    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:49.417403    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:49.468610    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:49.468669    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:49.530801    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:49.530801    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:52.075422    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:52.099836    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:52.132317    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.132317    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:52.136505    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:52.168253    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.168329    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:52.172204    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:52.201698    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.201728    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:52.204999    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:52.232400    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.232400    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:52.235747    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:52.264373    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.264373    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:52.268631    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:52.296360    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.296360    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:52.301499    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:52.328506    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.328506    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:52.332618    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:52.364046    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.364046    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:52.364046    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:52.364046    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:52.429893    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:52.429893    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:52.469809    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:52.469809    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:52.561531    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:52.552048    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.553328    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.554454    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.555743    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.556766    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:52.552048    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.553328    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.554454    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.555743    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.556766    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:52.561531    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:52.561531    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:52.589557    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:52.589610    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:55.143558    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:55.169650    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:55.199312    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.199312    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:55.203019    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:55.231107    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.231107    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:55.234903    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:55.262662    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.262662    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:55.267031    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:55.298297    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.298297    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:55.302781    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:55.329536    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.329536    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:55.333805    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:55.361920    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.361920    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:55.366064    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:55.390952    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.390952    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:55.395295    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:55.420565    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.420565    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:55.420565    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:55.420565    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:55.485763    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:55.485763    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:55.523584    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:55.524585    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:55.609239    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:55.599132    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.600050    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.601408    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.603109    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.604199    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:55.599132    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.600050    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.601408    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.603109    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.604199    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:55.609239    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:55.609239    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:55.635439    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:55.635439    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:58.192127    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:58.215045    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:58.245598    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.245598    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:58.249366    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:58.277778    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.277778    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:58.282003    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:58.308406    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.308406    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:58.312530    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:58.343615    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.343615    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:58.347469    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:58.374271    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.374323    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:58.378940    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:58.408013    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.408013    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:58.412815    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:58.442217    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.442217    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:58.446143    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:58.473696    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.473696    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:58.473696    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:58.473696    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:58.512536    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:58.512536    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:58.598847    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:58.588486    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.590430    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.592244    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.593816    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.594663    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:58.588486    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.590430    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.592244    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.593816    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.594663    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:58.598847    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:58.598847    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:58.631972    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:58.631972    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:58.686549    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:58.686549    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:01.254401    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:01.280957    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:01.311649    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.311649    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:01.317426    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:01.347111    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.347111    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:01.351587    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:01.384140    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.384140    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:01.388189    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:01.414950    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.415035    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:01.419033    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:01.451573    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.451573    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:01.458437    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:01.486676    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.486676    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:01.490132    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:01.519922    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.519945    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:01.523525    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:01.551133    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.551133    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:01.551133    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:01.551226    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:01.643156    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:01.629552    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.630752    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.631976    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.634745    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.636369    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:01.629552    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.630752    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.631976    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.634745    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.636369    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:01.643201    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:01.643201    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:01.670680    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:01.670680    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:01.719121    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:01.719121    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:01.778945    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:01.778945    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:04.321335    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:04.346182    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:04.379694    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.379694    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:04.383453    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:04.413770    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.413770    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:04.417438    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:04.449689    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.449689    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:04.453945    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:04.484307    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.484336    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:04.488052    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:04.520437    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.520529    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:04.523808    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:04.554411    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.554486    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:04.558132    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:04.590943    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.590991    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:04.596733    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:04.626155    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.626155    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:04.626155    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:04.626155    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:04.690704    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:04.690704    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:04.737151    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:04.737151    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:04.836298    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:04.823870    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.824851    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.826975    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.828203    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.829992    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:04.823870    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.824851    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.826975    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.828203    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.829992    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:04.836298    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:04.836298    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:04.866456    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:04.866456    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:07.431984    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:07.455983    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:07.487054    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.487054    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:07.491008    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:07.518559    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.518559    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:07.523241    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:07.555141    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.555206    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:07.559053    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:07.586080    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.586129    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:07.590415    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:07.617729    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.617802    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:07.621528    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:07.648847    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.648847    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:07.653016    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:07.679589    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.679589    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:07.682589    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:07.710509    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.710549    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:07.710584    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:07.710584    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:07.740602    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:07.740602    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:07.795596    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:07.795596    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:07.856481    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:07.856481    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:07.896541    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:07.896541    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:34:07.984068    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:34:07.988076    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:07.977659    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.978729    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.979455    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.981723    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.982573    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:07.977659    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.978729    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.979455    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.981723    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.982573    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1212 21:34:08.063575    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:34:08.063575    4248 retry.go:31] will retry after 24.947157304s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:34:10.493129    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:10.518032    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:10.551592    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.551592    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:10.555349    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:10.583478    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.583478    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:10.587337    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:10.614968    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.614968    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:10.618410    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:10.649067    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.649067    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:10.651995    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:10.683408    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.683408    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:10.687055    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:10.719380    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.719380    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:10.722379    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:10.750539    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.750539    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:10.753888    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:10.783559    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.783559    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:10.783559    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:10.783559    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:10.847683    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:10.847683    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:10.886290    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:10.886290    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:10.977825    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:10.965780    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.967047    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.970464    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.971759    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.973064    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:10.965780    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.967047    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.970464    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.971759    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.973064    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:10.977825    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:10.977825    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:11.007383    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:11.007383    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:13.563252    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:13.588423    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:13.621565    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.621565    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:13.625750    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:13.654777    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.654777    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:13.658374    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:13.688618    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.688672    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:13.692472    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:13.719610    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.719610    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:13.723237    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:13.752648    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.752648    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:13.756634    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:13.783829    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.783897    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:13.788161    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:13.819558    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.819558    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:13.823971    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:13.852579    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.852579    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:13.852650    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:13.852650    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:13.917806    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:13.917806    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:13.957291    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:13.957291    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:14.045151    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:14.033435    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.035385    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.037555    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.038496    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.039928    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:14.033435    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.035385    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.037555    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.038496    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.039928    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:14.045151    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:14.045151    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:14.072113    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:14.072113    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:16.634542    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:16.664271    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:16.693836    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.693836    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:16.697626    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:16.724961    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.724961    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:16.729680    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:16.759174    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.759174    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:16.764416    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:16.793992    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.793992    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:16.801287    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:16.833032    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.833032    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:16.837930    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:16.867028    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.867109    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:16.871232    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:16.901023    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.901023    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:16.904729    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:16.932215    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.932215    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:16.932215    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:16.932215    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:17.000205    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:17.000205    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:17.040242    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:17.040242    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:17.128411    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:17.118697    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.119770    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.121001    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.121935    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.122994    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:17.118697    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.119770    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.121001    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.121935    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.122994    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:17.128411    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:17.128411    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:17.154693    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:17.154693    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:19.712060    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:19.740196    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:19.769381    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.769381    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:19.774003    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:19.805171    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.805171    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:19.809714    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:19.839073    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.839073    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:19.846215    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:19.874403    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.874403    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:19.878637    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:19.912913    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.912913    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:19.916626    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:19.944658    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.944710    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:19.948241    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:19.979179    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.979241    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:19.982846    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:20.011754    4248 logs.go:282] 0 containers: []
	W1212 21:34:20.011812    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:20.011864    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:20.011913    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:20.052176    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:20.052176    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:20.143325    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:20.132505    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.133469    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.134667    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.135927    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.139001    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:20.132505    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.133469    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.134667    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.135927    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.139001    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:20.143363    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:20.143410    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:20.172292    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:20.172292    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:20.202316    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:34:20.222997    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:20.222997    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1212 21:34:20.293327    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:34:20.293327    4248 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 21:34:22.836246    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:22.860619    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:22.894925    4248 logs.go:282] 0 containers: []
	W1212 21:34:22.894925    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:22.898706    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:22.928391    4248 logs.go:282] 0 containers: []
	W1212 21:34:22.928391    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:22.931509    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:22.962526    4248 logs.go:282] 0 containers: []
	W1212 21:34:22.962526    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:22.966341    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:22.998739    4248 logs.go:282] 0 containers: []
	W1212 21:34:22.998810    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:23.002260    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:23.038485    4248 logs.go:282] 0 containers: []
	W1212 21:34:23.038485    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:23.042357    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:23.069862    4248 logs.go:282] 0 containers: []
	W1212 21:34:23.069862    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:23.073843    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:23.102066    4248 logs.go:282] 0 containers: []
	W1212 21:34:23.102066    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:23.105940    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:23.134159    4248 logs.go:282] 0 containers: []
	W1212 21:34:23.134159    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:23.134159    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:23.134159    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:23.200245    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:23.200245    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:23.245899    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:23.245899    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:23.334171    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:23.323723    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.324634    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.327041    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.328182    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.329038    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:23.323723    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.324634    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.327041    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.328182    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.329038    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:23.334171    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:23.334171    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:23.363271    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:23.363890    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:25.919925    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:25.943736    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:25.976745    4248 logs.go:282] 0 containers: []
	W1212 21:34:25.976745    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:25.982399    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:26.011034    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.011111    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:26.015074    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:26.041930    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.041960    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:26.046384    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:26.079224    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.079224    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:26.083294    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:26.114533    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.114622    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:26.118567    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:26.144766    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.144766    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:26.148635    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:26.178718    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.178773    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:26.182397    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:26.209360    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.209428    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:26.209458    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:26.209458    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:26.246526    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:26.246526    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:26.333000    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:26.322480    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.324197    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.326202    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.327840    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.328729    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:26.322480    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.324197    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.326202    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.327840    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.328729    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:26.333074    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:26.333074    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:26.364508    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:26.364508    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:26.415398    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:26.415922    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:26.808632    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:34:26.887159    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:34:26.887159    4248 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 21:34:28.982561    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:29.008658    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:29.038321    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.038321    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:29.043365    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:29.074505    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.074505    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:29.079133    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:29.107625    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.107625    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:29.111459    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:29.141746    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.141771    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:29.145351    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:29.173757    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.173757    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:29.177451    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:29.207313    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.207375    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:29.211355    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:29.238896    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.238896    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:29.242592    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:29.271887    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.271955    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:29.271992    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:29.271992    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:29.302135    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:29.302135    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:29.354758    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:29.354787    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:29.415567    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:29.416566    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:29.460567    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:29.460567    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:29.568507    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:29.558828    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.559760    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.561215    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.564376    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.566192    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:29.558828    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.559760    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.561215    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.564376    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.566192    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:32.072619    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:32.098196    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:32.129542    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.129605    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:32.133207    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:32.165871    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.165871    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:32.169728    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:32.197622    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.197622    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:32.201406    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:32.229774    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.229866    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:32.233559    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:32.261871    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.261922    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:32.265550    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:32.292765    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.292838    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:32.297859    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:32.324309    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.324309    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:32.330216    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:32.361218    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.361300    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:32.361300    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:32.361300    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:32.397467    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:32.397467    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:32.486261    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:32.475376    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.476624    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.477661    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.478789    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.479928    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:32.475376    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.476624    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.477661    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.478789    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.479928    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:32.486261    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:32.486261    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:32.514032    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:32.514583    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:32.560591    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:32.560639    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:33.016195    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:34:33.096004    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:34:33.096004    4248 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 21:34:33.099548    4248 out.go:179] * Enabled addons: 
	I1212 21:34:33.102664    4248 addons.go:530] duration metric: took 1m50.6481558s for enable addons: enabled=[]
	I1212 21:34:35.127374    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:35.151249    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:35.181629    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.181629    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:35.185603    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:35.214096    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.214151    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:35.218239    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:35.247180    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.247201    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:35.253408    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:35.284673    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.284673    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:35.288386    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:35.317675    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.317675    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:35.320949    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:35.350108    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.350178    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:35.353675    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:35.385201    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.385201    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:35.388633    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:35.416281    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.416281    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:35.416281    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:35.416281    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:35.444773    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:35.444773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:35.494185    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:35.494185    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:35.554851    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:35.554851    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:35.593466    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:35.593466    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:35.675497    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:35.665255    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.666291    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.667389    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.668747    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.670048    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:35.665255    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.666291    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.667389    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.668747    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.670048    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:38.181493    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:38.207315    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:38.240970    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.240970    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:38.244930    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:38.270853    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.270853    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:38.274626    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:38.302607    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.302607    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:38.306311    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:38.332974    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.332998    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:38.336938    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:38.366523    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.366523    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:38.370763    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:38.400788    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.400855    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:38.404730    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:38.432105    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.432140    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:38.435718    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:38.468252    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.468252    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:38.468252    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:38.468252    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:38.498010    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:38.498010    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:38.549065    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:38.549065    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:38.610282    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:38.610282    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:38.649865    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:38.649865    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:38.742735    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:38.729670    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.730594    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.734869    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.735575    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.737749    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:38.729670    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.730594    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.734869    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.735575    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.737749    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:41.248859    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:41.273451    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:41.307576    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.307576    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:41.311206    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:41.341191    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.341191    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:41.344873    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:41.373089    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.373089    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:41.377064    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:41.407927    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.407927    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:41.411904    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:41.438747    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.438747    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:41.442684    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:41.471705    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.471705    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:41.475643    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:41.502964    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.503009    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:41.506219    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:41.537468    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.537468    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:41.537468    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:41.537468    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:41.601385    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:41.601385    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:41.640441    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:41.640441    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:41.726762    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:41.716055    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.717335    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.718444    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.719692    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.720641    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:41.716055    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.717335    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.718444    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.719692    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.720641    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:41.727284    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:41.727418    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:41.753195    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:41.753250    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:44.308085    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:44.334644    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:44.365798    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.365798    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:44.369363    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:44.401410    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.401463    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:44.405291    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:44.434343    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.434343    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:44.438273    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:44.468474    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.468525    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:44.474306    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:44.500642    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.500642    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:44.504057    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:44.533188    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.533188    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:44.538912    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:44.570110    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.570156    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:44.573802    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:44.628237    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.628237    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:44.628313    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:44.628313    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:44.695236    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:44.695236    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:44.756020    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:44.756020    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:44.797607    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:44.797607    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:44.885974    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:44.873979    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.875233    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.876512    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.878696    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.880447    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:44.873979    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.875233    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.876512    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.878696    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.880447    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:44.885974    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:44.885974    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:47.418476    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:47.448163    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:47.485273    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.485367    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:47.489088    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:47.519610    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.519610    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:47.523527    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:47.556797    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.556797    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:47.561198    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:47.592455    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.592486    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:47.597545    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:47.642336    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.642336    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:47.646873    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:47.674652    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.674652    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:47.678167    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:47.711489    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.711583    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:47.715137    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:47.743744    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.743744    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:47.743744    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:47.743744    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:47.772500    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:47.772500    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:47.821703    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:47.821703    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:47.885067    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:47.886068    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:47.927691    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:47.927691    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:48.009816    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:47.997942    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:47.998780    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.001728    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.003155    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.003997    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:47.997942    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:47.998780    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.001728    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.003155    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.003997    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:50.515712    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:50.539433    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:50.569952    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.570011    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:50.573934    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:50.602042    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.602042    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:50.606815    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:50.637314    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.637314    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:50.641189    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:50.671374    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.671448    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:50.675158    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:50.705121    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.705121    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:50.708839    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:50.736349    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.736349    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:50.740642    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:50.766780    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.766780    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:50.771844    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:50.799830    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.799830    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:50.799830    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:50.799935    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:50.864221    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:50.864221    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:50.902900    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:50.902900    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:50.987201    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:50.977230    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.978460    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.979921    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.981707    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.982796    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:50.977230    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.978460    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.979921    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.981707    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.982796    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:50.987245    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:50.987307    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:51.013974    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:51.013974    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:53.565470    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:53.591015    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:53.622676    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.622700    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:53.626721    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:53.656673    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.656708    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:53.660173    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:53.690897    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.690897    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:53.695672    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:53.724746    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.724746    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:53.729650    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:53.755860    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.755860    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:53.761786    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:53.788721    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.788721    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:53.792819    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:53.824882    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.824923    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:53.827878    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:53.861835    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.861835    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:53.861835    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:53.861922    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:53.923537    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:53.923537    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:53.963431    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:53.963431    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:54.047319    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:54.037350    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.038501    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.039589    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.040496    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.043315    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:54.037350    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.038501    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.039589    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.040496    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.043315    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:54.047386    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:54.047386    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:54.072633    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:54.072633    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:56.625180    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:56.650655    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:56.685280    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.685318    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:56.689156    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:56.714493    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.714493    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:56.718695    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:56.746923    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.746990    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:56.750886    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:56.778419    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.778419    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:56.783916    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:56.811946    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.811946    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:56.815910    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:56.846245    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.846245    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:56.849750    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:56.881552    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.881612    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:56.886159    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:56.914494    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.914494    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:56.914494    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:56.914494    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:56.978107    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:56.978107    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:57.017141    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:57.017141    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:57.105278    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:57.095758    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.096802    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.099867    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.100954    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.102366    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:57.095758    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.096802    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.099867    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.100954    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.102366    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:57.105278    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:57.105278    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:57.136106    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:57.136106    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:59.698008    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:59.721859    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:59.752926    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.752926    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:59.758293    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:59.787817    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.787817    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:59.792012    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:59.820724    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.820724    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:59.824383    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:59.853943    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.853943    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:59.856939    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:59.884234    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.884234    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:59.887359    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:59.917769    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.917769    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:59.920766    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:59.947735    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.947735    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:59.950845    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:59.982686    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.982686    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:59.982686    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:59.982686    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:00.047428    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:00.047428    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:00.087722    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:00.087722    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:00.173037    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:00.162970    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.163861    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.166472    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.167086    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.169872    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:00.162970    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.163861    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.166472    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.167086    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.169872    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:00.173037    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:00.173124    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:00.200722    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:00.200722    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:02.758771    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:02.785740    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:02.818761    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.818761    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:02.822630    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:02.856985    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.857041    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:02.860042    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:02.891635    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.891635    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:02.896827    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:02.927006    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.927006    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:02.930675    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:02.961911    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.961911    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:02.966203    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:02.995618    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.995618    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:03.000489    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:03.029638    4248 logs.go:282] 0 containers: []
	W1212 21:35:03.029720    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:03.033579    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:03.062226    4248 logs.go:282] 0 containers: []
	W1212 21:35:03.062226    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:03.062226    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:03.062226    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:03.123924    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:03.123924    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:03.164267    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:03.164267    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:03.278702    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:03.266561    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.267580    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.268479    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.270653    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.271579    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:03.266561    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.267580    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.268479    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.270653    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.271579    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:03.278702    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:03.278702    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:03.310678    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:03.310678    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:05.870522    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:05.895548    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:05.931745    4248 logs.go:282] 0 containers: []
	W1212 21:35:05.931745    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:05.935905    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:05.967105    4248 logs.go:282] 0 containers: []
	W1212 21:35:05.967167    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:05.970526    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:05.999378    4248 logs.go:282] 0 containers: []
	W1212 21:35:05.999501    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:06.005272    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:06.033559    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.033559    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:06.037432    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:06.067367    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.067423    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:06.071216    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:06.098778    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.098778    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:06.102725    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:06.129330    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.129373    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:06.133426    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:06.161909    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.161982    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:06.161982    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:06.161982    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:06.227303    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:06.227303    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:06.268038    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:06.268038    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:06.361371    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:06.349507    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.350379    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.353273    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.354715    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.355937    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:06.349507    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.350379    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.353273    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.354715    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.355937    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:06.361371    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:06.361371    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:06.387773    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:06.387773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:08.952500    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:08.976516    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:09.008567    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.008567    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:09.012505    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:09.041661    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.041661    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:09.045897    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:09.072715    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.072715    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:09.076471    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:09.104975    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.104975    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:09.110239    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:09.137529    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.137529    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:09.142874    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:09.172751    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.172856    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:09.176271    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:09.208124    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.208124    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:09.211966    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:09.240860    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.240922    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:09.240922    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:09.240922    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:09.277501    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:09.277501    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:09.365788    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:09.355192    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.356293    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.357382    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.358419    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.359695    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:09.355192    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.356293    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.357382    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.358419    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.359695    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:09.365788    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:09.365788    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:09.393564    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:09.393564    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:09.446625    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:09.446625    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:12.014046    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:12.039061    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:12.073012    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.073012    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:12.076517    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:12.106078    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.106078    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:12.110106    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:12.148792    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.148792    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:12.153359    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:12.180485    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.180485    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:12.184489    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:12.216517    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.216517    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:12.219938    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:12.248201    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.248280    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:12.251955    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:12.279303    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.279303    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:12.283688    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:12.311155    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.311155    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:12.311240    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:12.311240    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:12.363330    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:12.363330    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:12.424272    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:12.424272    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:12.465092    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:12.465092    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:12.553464    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:12.543149    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.544047    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.546957    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.548802    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.550266    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:12.543149    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.544047    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.546957    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.548802    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.550266    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:12.553464    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:12.553464    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:15.089907    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:15.115463    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:15.148783    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.148874    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:15.153957    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:15.184011    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.184098    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:15.189531    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:15.217178    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.217178    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:15.220770    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:15.250751    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.250751    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:15.254712    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:15.285217    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.285217    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:15.289685    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:15.318549    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.318549    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:15.322745    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:15.350449    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.350449    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:15.354843    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:15.387373    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.387373    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:15.387373    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:15.387448    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:15.447923    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:15.447923    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:15.486013    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:15.486013    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:15.578245    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:15.568067    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.569055    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.570286    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.571140    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.573311    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:15.568067    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.569055    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.570286    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.571140    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.573311    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:15.578325    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:15.578352    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:15.605626    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:15.605626    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:18.166679    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:18.192323    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:18.225358    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.225358    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:18.228980    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:18.257552    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.257552    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:18.261101    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:18.289460    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.289460    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:18.294831    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:18.323718    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.323799    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:18.327263    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:18.355946    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.356051    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:18.360915    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:18.389130    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.389212    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:18.392968    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:18.421891    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.421979    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:18.425694    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:18.454158    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.454158    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:18.454158    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:18.454158    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:18.537920    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:18.527608    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.528385    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.531174    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.532949    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.533980    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:18.527608    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.528385    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.531174    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.532949    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.533980    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:18.537920    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:18.537920    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:18.567575    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:18.567575    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:18.620910    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:18.620945    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:18.683030    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:18.683030    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:21.227570    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:21.256918    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:21.288783    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.288783    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:21.292853    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:21.323426    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.323426    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:21.326841    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:21.358519    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.358519    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:21.364432    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:21.396744    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.396829    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:21.400322    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:21.428939    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.428939    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:21.432771    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:21.462453    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.462453    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:21.466020    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:21.494057    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.494106    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:21.497434    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:21.528610    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.528649    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:21.528649    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:21.528649    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:21.565220    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:21.565220    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:21.656317    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:21.646689    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.647621    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.649439    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.651349    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.653188    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:21.646689    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.647621    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.649439    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.651349    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.653188    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:21.656317    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:21.656317    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:21.685835    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:21.685835    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:21.735864    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:21.735864    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:24.301555    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:24.325946    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:24.356277    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.356277    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:24.360286    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:24.388916    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.388916    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:24.392624    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:24.420665    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.420696    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:24.424318    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:24.453296    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.453296    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:24.456761    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:24.485923    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.485923    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:24.489706    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:24.521430    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.521430    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:24.525001    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:24.553182    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.553182    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:24.557651    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:24.585999    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.585999    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:24.585999    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:24.585999    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:24.649025    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:24.649025    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:24.687813    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:24.687813    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:24.771442    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:24.762805    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.764135    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.765433    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.766914    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.768359    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:24.762805    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.764135    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.765433    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.766914    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.768359    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:24.771442    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:24.771442    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:24.798209    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:24.798236    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:27.358394    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:27.382187    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:27.414963    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.414963    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:27.418575    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:27.446874    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.446874    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:27.450946    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:27.478168    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.478206    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:27.481426    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:27.510494    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.510494    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:27.514962    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:27.544571    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.544571    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:27.548425    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:27.577760    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.577760    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:27.583647    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:27.611248    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.611321    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:27.614827    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:27.643657    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.643657    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:27.643657    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:27.643657    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:27.727590    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:27.720083    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.721080    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.722381    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.723519    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.724748    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:27.720083    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.721080    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.722381    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.723519    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.724748    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:27.727590    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:27.727590    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:27.758480    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:27.758480    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:27.807919    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:27.807919    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:27.868159    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:27.868159    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:30.414374    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:30.439656    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:30.469888    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.469958    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:30.473898    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:30.506297    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.506297    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:30.510214    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:30.545930    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.545982    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:30.549910    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:30.576713    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.576713    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:30.581000    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:30.611561    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.611561    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:30.615085    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:30.643517    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.643600    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:30.647297    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:30.677589    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.677589    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:30.681477    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:30.712989    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.713050    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:30.713050    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:30.713050    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:30.800951    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:30.787443    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.789072    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.790773    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.791718    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.795269    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:30.787443    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.789072    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.790773    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.791718    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.795269    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:30.800985    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:30.801031    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:30.827165    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:30.827165    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:30.877219    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:30.877219    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:30.939298    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:30.939298    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:33.484552    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:33.509270    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:33.543536    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.543536    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:33.547226    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:33.579403    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.579456    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:33.583656    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:33.611689    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.611715    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:33.615934    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:33.641875    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.641939    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:33.645973    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:33.678622    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.678622    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:33.682927    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:33.712281    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.712305    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:33.716240    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:33.744051    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.744127    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:33.747417    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:33.779521    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.779591    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:33.779591    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:33.779591    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:33.840187    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:33.840187    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:33.882639    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:33.882639    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:33.969148    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:33.957711    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.958835    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.961164    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.962371    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.963325    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:33.957711    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.958835    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.961164    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.962371    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.963325    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:33.969148    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:33.969148    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:33.997909    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:33.997909    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:36.549852    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:36.575183    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:36.607678    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.607678    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:36.611597    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:36.639236    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.639236    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:36.642949    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:36.671624    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.671715    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:36.677217    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:36.704204    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.704254    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:36.707613    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:36.736929    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.736929    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:36.741122    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:36.769406    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.769406    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:36.772706    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:36.799543    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.799620    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:36.803815    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:36.831079    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.831159    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:36.831159    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:36.831200    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:36.896378    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:36.896378    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:36.934866    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:36.934866    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:37.023664    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:37.013641    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.015155    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.015847    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.018003    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.018979    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:37.013641    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.015155    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.015847    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.018003    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.018979    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:37.023664    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:37.023664    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:37.053528    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:37.053528    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:39.635097    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:39.658801    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:39.689842    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.689897    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:39.693102    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:39.720497    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.720497    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:39.724029    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:39.755115    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.755115    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:39.759351    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:39.788837    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.788837    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:39.795292    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:39.820715    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.820715    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:39.824986    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:39.851308    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.851308    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:39.855167    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:39.885106    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.885106    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:39.888522    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:39.919426    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.919426    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:39.919426    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:39.919426    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:40.002497    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:39.992667    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.993466    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.995734    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.996894    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.998122    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:39.992667    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.993466    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.995734    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.996894    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.998122    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:40.002497    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:40.002497    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:40.033288    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:40.033332    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:40.080834    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:40.080834    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:40.164330    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:40.164330    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:42.710863    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:42.734865    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:42.767778    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.767778    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:42.771433    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:42.798128    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.798128    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:42.801849    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:42.831381    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.831381    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:42.834753    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:42.862923    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.862977    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:42.866577    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:42.894694    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.894694    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:42.898324    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:42.927095    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.927169    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:42.930897    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:42.960437    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.960485    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:42.963899    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:42.992769    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.992769    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:42.992769    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:42.992769    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:43.076050    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:43.066448    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.067916    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.069057    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.070194    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.071270    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:43.066448    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.067916    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.069057    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.070194    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.071270    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:43.076050    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:43.076050    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:43.117444    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:43.117444    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:43.163609    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:43.163675    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:43.222686    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:43.222686    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:45.769012    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:45.792622    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:45.828260    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.828300    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:45.832060    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:45.860265    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.860345    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:45.864126    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:45.892449    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.892449    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:45.895900    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:45.928107    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.928492    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:45.933848    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:45.963204    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.963204    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:45.967539    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:45.994068    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.994068    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:45.997960    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:46.029774    4248 logs.go:282] 0 containers: []
	W1212 21:35:46.029774    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:46.034005    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:46.064207    4248 logs.go:282] 0 containers: []
	W1212 21:35:46.064207    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:46.064275    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:46.064297    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:46.150334    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:46.151334    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:46.192422    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:46.192422    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:46.282161    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:46.268121    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.269865    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.274691    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.275494    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.278053    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:46.268121    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.269865    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.274691    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.275494    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.278053    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:46.282161    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:46.282161    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:46.308247    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:46.308247    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:48.880691    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:48.905168    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:48.936143    4248 logs.go:282] 0 containers: []
	W1212 21:35:48.936143    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:48.941055    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:48.967633    4248 logs.go:282] 0 containers: []
	W1212 21:35:48.967681    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:48.972986    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:49.001908    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.001978    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:49.005690    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:49.033288    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.033288    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:49.037158    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:49.068272    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.068272    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:49.072674    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:49.118349    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.118385    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:49.123821    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:49.152003    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.152003    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:49.155603    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:49.184782    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.184857    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:49.184857    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:49.184857    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:49.245561    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:49.245561    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:49.286211    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:49.286211    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:49.376977    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:49.367834   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.369032   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.370233   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.372131   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.374116   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:49.367834   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.369032   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.370233   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.372131   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.374116   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:49.376977    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:49.376977    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:49.403713    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:49.403713    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:51.956745    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:51.982481    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:52.016133    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.016133    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:52.021023    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:52.049536    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.049536    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:52.053672    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:52.080846    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.080846    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:52.084803    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:52.113297    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.113338    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:52.116543    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:52.147940    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.147940    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:52.151545    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:52.182320    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.182320    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:52.186389    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:52.214710    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.214710    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:52.219053    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:52.245190    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.245190    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:52.245190    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:52.245190    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:52.298311    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:52.298311    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:52.366732    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:52.366732    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:52.407792    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:52.407792    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:52.496661    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:52.484312   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.485150   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.488916   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.491754   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.493226   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:52.484312   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.485150   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.488916   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.491754   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.493226   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:52.496661    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:52.496715    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:55.027190    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:55.056715    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:55.092856    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.092927    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:55.096633    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:55.129725    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.129780    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:55.133503    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:55.161685    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.161764    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:55.165325    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:55.194081    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.194081    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:55.197364    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:55.229572    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.229572    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:55.233158    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:55.260429    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.260429    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:55.264792    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:55.292582    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.292582    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:55.296385    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:55.324732    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.324732    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:55.324732    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:55.324732    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:55.378532    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:55.378610    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:55.442350    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:55.442350    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:55.482305    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:55.482305    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:55.568490    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:55.558199   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.559481   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.560697   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.561968   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.565448   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:55.558199   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.559481   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.560697   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.561968   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.565448   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:55.568490    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:55.568490    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:58.101014    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:58.125393    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:58.155725    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.155725    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:58.159868    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:58.189119    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.189119    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:58.193157    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:58.223237    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.223237    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:58.226683    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:58.255695    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.255753    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:58.260433    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:58.288788    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.288859    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:58.292437    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:58.323598    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.323671    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:58.327478    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:58.354090    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.354178    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:58.357888    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:58.386253    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.386279    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:58.386314    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:58.386314    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:58.447141    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:58.447141    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:58.486943    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:58.486943    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:58.568464    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:58.560119   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.561312   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.562297   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.564010   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.565939   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:58.560119   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.561312   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.562297   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.564010   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.565939   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:58.568464    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:58.568464    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:58.595849    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:58.595886    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:01.149596    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:01.175623    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:01.208519    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.208575    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:01.211916    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:01.245312    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.245312    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:01.249530    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:01.278392    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.278392    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:01.286444    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:01.317355    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.317406    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:01.321724    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:01.353217    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.353217    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:01.357807    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:01.391831    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.391831    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:01.395723    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:01.424095    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.424095    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:01.429268    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:01.462926    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.462926    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:01.462926    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:01.462926    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:01.551074    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:01.539269   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.540233   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.541438   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.542446   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.543830   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:01.539269   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.540233   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.541438   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.542446   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.543830   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:01.551074    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:01.551074    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:01.578369    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:01.578369    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:01.629021    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:01.629021    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:01.698809    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:01.698809    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:04.243670    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:04.267614    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:04.301625    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.301625    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:04.305862    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:04.333024    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.333024    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:04.336022    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:04.367192    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.367192    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:04.370605    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:04.399595    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.399648    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:04.403374    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:04.432778    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.432778    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:04.436573    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:04.467412    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.467412    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:04.471329    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:04.498578    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.498578    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:04.502597    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:04.531764    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.531784    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:04.531784    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:04.531843    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:04.570958    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:04.570958    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:04.661113    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:04.648460   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.649248   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.652578   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.653840   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.654704   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:04.648460   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.649248   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.652578   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.653840   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.654704   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:04.661169    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:04.661169    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:04.690481    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:04.690542    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:04.741754    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:04.741754    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:07.308278    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:07.331410    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:07.365378    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.365378    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:07.369132    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:07.399048    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.399048    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:07.403054    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:07.435986    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.435986    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:07.440195    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:07.468277    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.468277    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:07.473061    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:07.502665    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.502737    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:07.505870    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:07.535294    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.535294    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:07.539205    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:07.568443    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.568443    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:07.571680    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:07.603485    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.603485    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:07.603485    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:07.603485    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:07.654029    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:07.654069    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:07.718737    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:07.718737    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:07.758197    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:07.758197    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:07.848949    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:07.837788   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.838778   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.839873   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.841153   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.842068   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:07.837788   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.838778   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.839873   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.841153   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.842068   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:07.848949    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:07.848949    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:10.383554    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:10.406923    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:10.436672    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.438058    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:10.441328    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:10.470329    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.470329    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:10.475355    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:10.503029    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.504040    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:10.508067    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:10.535078    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.535078    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:10.538911    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:10.572578    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.572578    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:10.576953    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:10.605274    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.605274    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:10.609586    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:10.636893    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.636893    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:10.640901    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:10.670634    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.670634    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:10.670634    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:10.670634    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:10.740301    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:10.740301    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:10.777927    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:10.777927    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:10.872052    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:10.861842   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.862596   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.865231   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.866657   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.867639   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:10.861842   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.862596   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.865231   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.866657   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.867639   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:10.872052    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:10.872052    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:10.903069    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:10.903069    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:13.460331    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:13.482121    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:13.513917    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.513943    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:13.517730    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:13.551122    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.551122    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:13.554989    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:13.585497    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.585531    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:13.591062    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:13.617529    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.617529    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:13.620977    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:13.649520    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.649563    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:13.653279    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:13.680320    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.680320    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:13.684170    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:13.715222    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.715222    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:13.719105    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:13.748387    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.748387    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:13.748387    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:13.748387    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:13.813468    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:13.813468    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:13.854552    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:13.854552    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:13.940965    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:13.930876   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.931992   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.933250   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.934621   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.935762   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:13.930876   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.931992   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.933250   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.934621   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.935762   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:13.940965    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:13.940965    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:13.967276    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:13.967276    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:16.517841    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:16.542582    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:16.572379    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.572379    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:16.576052    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:16.603452    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.603544    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:16.607222    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:16.634623    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.634623    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:16.638649    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:16.669129    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.669129    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:16.673369    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:16.701294    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.701294    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:16.707834    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:16.734962    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.734962    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:16.739394    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:16.768315    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.768315    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:16.772559    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:16.801591    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.801670    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:16.801687    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:16.801687    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:16.866870    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:16.866870    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:16.906766    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:16.906766    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:16.998441    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:16.985782   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.986673   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.991870   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.993024   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.993940   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:16.985782   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.986673   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.991870   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.993024   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.993940   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:16.998441    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:16.998441    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:17.026255    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:17.026255    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:19.584671    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:19.610156    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:19.644064    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.644129    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:19.648288    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:19.678479    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.678479    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:19.682139    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:19.711766    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.711766    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:19.715209    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:19.744913    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.744913    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:19.748961    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:19.780312    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.780312    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:19.784184    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:19.812347    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.812347    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:19.816306    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:19.844923    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.844923    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:19.848927    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:19.877442    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.877442    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:19.877442    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:19.877442    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:19.942152    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:19.942152    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:19.981218    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:19.981218    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:20.072288    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:20.063301   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.064454   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.065982   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.067322   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.068426   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:20.063301   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.064454   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.065982   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.067322   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.068426   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:20.072288    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:20.072288    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:20.099643    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:20.099643    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:22.659535    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:22.685256    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:22.716650    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.716701    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:22.720282    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:22.748382    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.748382    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:22.752865    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:22.781255    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.781255    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:22.785427    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:22.817875    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.817875    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:22.822168    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:22.850625    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.850625    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:22.854306    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:22.882603    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.882665    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:22.886850    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:22.915661    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.915661    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:22.919273    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:22.948188    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.948219    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:22.948219    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:22.948272    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:23.001854    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:23.001854    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:23.062918    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:23.062918    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:23.102898    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:23.102898    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:23.188691    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:23.179388   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.180542   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.181616   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.183060   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.184504   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:23.179388   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.180542   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.181616   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.183060   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.184504   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:23.188733    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:23.188773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:25.720984    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:25.747517    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:25.789126    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.789126    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:25.792555    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:25.825100    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.825100    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:25.829108    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:25.859944    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.859944    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:25.862936    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:25.899027    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.899027    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:25.903029    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:25.932069    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.932069    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:25.937652    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:25.970039    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.970039    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:25.974772    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:26.007166    4248 logs.go:282] 0 containers: []
	W1212 21:36:26.007166    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:26.010547    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:26.043326    4248 logs.go:282] 0 containers: []
	W1212 21:36:26.043326    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:26.043380    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:26.043380    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:26.136579    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:26.129570   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.130713   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.131776   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.133029   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.133944   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:26.129570   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.130713   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.131776   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.133029   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.133944   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:26.136579    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:26.136579    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:26.164100    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:26.164100    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:26.215761    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:26.215761    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:26.284627    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:26.284627    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:28.841950    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:28.867715    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:28.905745    4248 logs.go:282] 0 containers: []
	W1212 21:36:28.905745    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:28.908970    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:28.939518    4248 logs.go:282] 0 containers: []
	W1212 21:36:28.939518    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:28.943636    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:28.973085    4248 logs.go:282] 0 containers: []
	W1212 21:36:28.973085    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:28.977068    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:29.006533    4248 logs.go:282] 0 containers: []
	W1212 21:36:29.006533    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:29.011428    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:29.051385    4248 logs.go:282] 0 containers: []
	W1212 21:36:29.051385    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:29.055841    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:29.091342    4248 logs.go:282] 0 containers: []
	W1212 21:36:29.091342    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:29.095332    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:29.123336    4248 logs.go:282] 0 containers: []
	W1212 21:36:29.123336    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:29.126340    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:29.155367    4248 logs.go:282] 0 containers: []
	W1212 21:36:29.155367    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:29.155367    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:29.155367    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:29.207287    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:29.207287    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:29.272168    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:29.272168    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:29.312257    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:29.312257    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:29.391617    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:29.382784   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.383879   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.384928   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.386418   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.387786   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:29.382784   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.383879   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.384928   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.386418   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.387786   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:29.391617    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:29.391617    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:31.923841    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:31.950124    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:31.983967    4248 logs.go:282] 0 containers: []
	W1212 21:36:31.983967    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:31.987737    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:32.015027    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.015027    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:32.020109    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:32.055983    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.056068    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:32.059730    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:32.089140    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.089140    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:32.094462    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:32.122929    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.122929    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:32.126837    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:32.156251    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.156251    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:32.160350    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:32.191862    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.191949    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:32.195885    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:32.223866    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.223925    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:32.223925    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:32.223950    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:32.255049    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:32.255049    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:32.302818    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:32.302880    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:32.366288    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:32.366288    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:32.405752    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:32.405752    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:32.490704    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:32.476428   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.477300   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.482162   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.483107   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.484601   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:32.476428   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.477300   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.482162   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.483107   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.484601   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:34.995924    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:35.024010    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:35.056509    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.056509    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:35.060912    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:35.093115    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.093115    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:35.097758    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:35.128352    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.128352    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:35.132438    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:35.159545    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.159545    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:35.163881    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:35.193455    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.193455    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:35.197292    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:35.225826    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.225826    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:35.230118    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:35.258718    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.258718    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:35.262754    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:35.289884    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.289884    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:35.289884    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:35.289884    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:35.354177    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:35.354177    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:35.392766    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:35.393766    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:35.508577    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:35.495201   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.497790   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.499934   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.500799   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.503455   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:35.495201   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.497790   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.499934   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.500799   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.503455   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:35.508577    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:35.508577    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:35.536964    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:35.538023    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:38.113096    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:38.138012    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:38.170611    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.170611    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:38.174540    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:38.203460    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.203460    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:38.209947    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:38.239843    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.239843    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:38.243116    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:38.271611    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.271611    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:38.275487    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:38.305418    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.305450    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:38.309409    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:38.336902    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.336902    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:38.340380    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:38.367606    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.367606    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:38.373821    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:38.402583    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.402583    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:38.402583    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:38.402583    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:38.438279    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:38.438279    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:38.525316    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:38.512227   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.513179   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.516702   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.518912   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.519829   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:38.512227   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.513179   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.516702   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.518912   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.519829   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:38.525316    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:38.525316    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:38.552742    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:38.553263    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:38.623531    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:38.623531    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:41.192803    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:41.221527    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:41.253765    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.253765    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:41.258162    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:41.286154    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.286154    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:41.290125    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:41.316985    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.316985    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:41.321219    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:41.349797    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.349797    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:41.353105    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:41.383082    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.383082    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:41.386895    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:41.414456    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.414456    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:41.418483    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:41.449520    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.449577    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:41.453163    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:41.486452    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.486504    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:41.486504    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:41.486504    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:41.547617    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:41.547617    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:41.587426    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:41.587426    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:41.672162    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:41.660909   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.663125   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.664194   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.665601   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.666634   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:41.660909   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.663125   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.664194   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.665601   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.666634   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:41.672162    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:41.672162    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:41.698838    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:41.698838    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:44.254238    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:44.279639    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:44.313852    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.313852    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:44.317789    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:44.346488    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.346488    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:44.349923    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:44.379740    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.379774    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:44.383168    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:44.412140    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.412140    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:44.416191    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:44.460651    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.460681    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:44.465023    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:44.496502    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.496526    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:44.500357    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:44.532104    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.532155    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:44.536284    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:44.564677    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.564677    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:44.564677    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:44.564768    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:44.642641    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:44.642641    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:44.681185    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:44.681185    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:44.775811    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:44.763716   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.764946   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.767080   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.768963   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.770287   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:44.763716   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.764946   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.767080   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.768963   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.770287   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:44.775858    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:44.775858    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:44.802443    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:44.802443    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:47.355434    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:47.380861    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:47.416615    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.416688    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:47.422899    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:47.449927    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.449927    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:47.453937    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:47.482382    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.482382    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:47.486265    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:47.517752    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.517752    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:47.521863    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:47.553097    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.553097    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:47.557020    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:47.586229    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.586229    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:47.590605    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:47.629776    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.629776    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:47.633503    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:47.660408    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.660408    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:47.660408    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:47.660408    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:47.751292    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:47.741586   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.742768   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.744535   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.745790   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.746879   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:47.741586   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.742768   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.744535   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.745790   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.746879   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:47.751292    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:47.751292    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:47.779192    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:47.779254    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:47.837296    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:47.837296    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:47.900027    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:47.900027    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:50.444550    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:50.467997    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:50.496690    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.496690    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:50.500967    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:50.526317    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.526317    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:50.530527    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:50.561433    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.561433    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:50.566001    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:50.618519    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.618519    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:50.622092    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:50.650073    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.650073    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:50.655016    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:50.683594    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.683623    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:50.687452    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:50.718509    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.718509    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:50.724946    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:50.757545    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.757577    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:50.757618    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:50.757618    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:50.819457    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:50.819457    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:50.858548    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:50.858548    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:50.941749    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:50.931367   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.932776   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.933833   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.935487   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.938377   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:50.931367   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.932776   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.933833   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.935487   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.938377   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:50.941749    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:50.941749    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:50.969772    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:50.969772    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:53.520939    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:53.549491    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:53.583344    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.583344    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:53.588894    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:53.618751    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.618751    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:53.623090    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:53.650283    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.650283    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:53.656108    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:53.682662    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.682727    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:53.686551    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:53.713705    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.713705    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:53.717716    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:53.744792    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.744792    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:53.749211    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:53.779976    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.779976    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:53.783888    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:53.815109    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.815109    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:53.815109    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:53.815109    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:53.876921    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:53.876921    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:53.916304    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:53.916304    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:54.003977    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:53.994198   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.995584   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.997212   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.998274   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.999401   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:53.994198   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.995584   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.997212   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.998274   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.999401   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:54.004510    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:54.004510    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:54.033807    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:54.033807    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:56.586896    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:56.610373    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:56.643875    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.643875    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:56.648210    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:56.679979    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.679979    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:56.684252    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:56.712701    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.712745    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:56.716425    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:56.746231    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.746231    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:56.750051    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:56.778902    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.778902    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:56.784361    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:56.813624    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.813624    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:56.817949    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:56.846221    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.846221    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:56.849772    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:56.880299    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.880299    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:56.880299    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:56.880299    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:56.945090    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:56.946089    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:56.985505    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:56.985505    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:57.077375    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:57.068729   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.069760   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.070813   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.071834   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.072749   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:57.068729   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.069760   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.070813   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.071834   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.072749   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:57.077375    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:57.077375    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:57.103533    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:57.103533    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:59.659092    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:59.684113    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:59.716016    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.716040    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:59.719576    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:59.749209    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.749209    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:59.752876    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:59.781442    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.781442    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:59.785342    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:59.814766    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.814766    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:59.818786    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:59.846373    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.846373    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:59.849782    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:59.877994    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.877994    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:59.881893    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:59.910479    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.910479    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:59.914372    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:59.946561    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.946561    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:59.946561    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:59.946561    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:00.008124    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:00.008124    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:00.047147    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:00.047147    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:00.137432    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:00.126870   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.127736   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.130207   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.131199   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.132348   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:00.126870   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.127736   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.130207   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.131199   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.132348   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:00.137480    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:00.137480    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:00.167211    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:00.167211    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:02.725601    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:02.750880    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:02.781655    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.781720    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:02.785930    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:02.814342    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.815352    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:02.819060    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:02.848212    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.848212    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:02.852622    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:02.879034    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.879034    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:02.883002    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:02.914061    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.914061    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:02.918271    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:02.946216    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.946289    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:02.949752    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:02.979537    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.979570    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:02.983289    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:03.012201    4248 logs.go:282] 0 containers: []
	W1212 21:37:03.012201    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:03.012201    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:03.012201    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:03.098494    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:03.086265   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.087072   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.089538   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.090486   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.092996   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:03.086265   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.087072   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.089538   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.090486   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.092996   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:03.098494    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:03.098494    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:03.124942    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:03.124942    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:03.172838    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:03.172838    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:03.233652    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:03.233652    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:05.778260    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:05.806049    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:05.834569    4248 logs.go:282] 0 containers: []
	W1212 21:37:05.834569    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:05.838184    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:05.871331    4248 logs.go:282] 0 containers: []
	W1212 21:37:05.871331    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:05.874924    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:05.904108    4248 logs.go:282] 0 containers: []
	W1212 21:37:05.904108    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:05.907882    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:05.941911    4248 logs.go:282] 0 containers: []
	W1212 21:37:05.941911    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:05.945711    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:05.978806    4248 logs.go:282] 0 containers: []
	W1212 21:37:05.978845    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:05.983103    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:06.010395    4248 logs.go:282] 0 containers: []
	W1212 21:37:06.010395    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:06.015899    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:06.043426    4248 logs.go:282] 0 containers: []
	W1212 21:37:06.043475    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:06.047525    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:06.075777    4248 logs.go:282] 0 containers: []
	W1212 21:37:06.075777    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:06.075777    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:06.075777    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:06.140912    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:06.140912    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:06.180839    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:06.180839    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:06.273920    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:06.262433   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.263564   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.264506   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.265910   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.267120   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:06.262433   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.263564   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.264506   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.265910   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.267120   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:06.273941    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:06.273941    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:06.301408    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:06.301408    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:08.853362    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:08.880482    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:08.912285    4248 logs.go:282] 0 containers: []
	W1212 21:37:08.912285    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:08.915914    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:08.945359    4248 logs.go:282] 0 containers: []
	W1212 21:37:08.945359    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:08.951021    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:08.978398    4248 logs.go:282] 0 containers: []
	W1212 21:37:08.978398    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:08.981959    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:09.013763    4248 logs.go:282] 0 containers: []
	W1212 21:37:09.013763    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:09.017724    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:09.045423    4248 logs.go:282] 0 containers: []
	W1212 21:37:09.045423    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:09.049596    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:09.077554    4248 logs.go:282] 0 containers: []
	W1212 21:37:09.077554    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:09.081163    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:09.108945    4248 logs.go:282] 0 containers: []
	W1212 21:37:09.109001    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:09.112577    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:09.141679    4248 logs.go:282] 0 containers: []
	W1212 21:37:09.141740    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:09.141765    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:09.141765    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:09.207494    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:09.208014    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:09.275675    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:09.275675    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:09.320177    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:09.320252    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:09.418820    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:09.405124   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.406042   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.410434   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.411496   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.412429   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:09.405124   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.406042   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.410434   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.411496   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.412429   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:09.418849    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:09.418849    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:11.950067    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:11.974163    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:12.007025    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.007025    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:12.010964    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:12.042863    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.042863    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:12.046143    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:12.076655    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.076726    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:12.080236    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:12.107161    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.107161    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:12.113344    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:12.142179    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.142272    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:12.146446    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:12.176797    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.176797    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:12.180681    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:12.209797    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.209797    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:12.213605    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:12.244494    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.244494    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:12.244494    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:12.244494    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:12.332970    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:12.322197   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.323340   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.325130   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.326221   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.328022   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:12.322197   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.323340   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.325130   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.326221   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.328022   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:12.332970    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:12.332970    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:12.362486    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:12.363006    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:12.407548    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:12.407548    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:12.469640    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:12.469640    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:15.019141    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:15.042869    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:15.073404    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.073404    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:15.076962    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:15.105390    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.105390    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:15.109785    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:15.143740    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.143775    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:15.147734    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:15.174650    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.174711    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:15.178235    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:15.207870    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.207870    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:15.212288    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:15.248454    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.248454    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:15.253060    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:15.282067    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.282067    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:15.285778    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:15.317032    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.317032    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:15.317032    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:15.317032    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:15.350767    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:15.350767    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:15.408508    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:15.408508    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:15.471124    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:15.471124    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:15.511541    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:15.511541    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:15.597230    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:15.586821   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.588485   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.590856   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.591864   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.593117   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:15.586821   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.588485   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.590856   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.591864   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.593117   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:18.103161    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:18.132020    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:18.167621    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.167621    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:18.171555    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:18.197535    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.197535    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:18.201484    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:18.231207    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.231237    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:18.234569    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:18.262608    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.262608    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:18.266310    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:18.291496    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.291496    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:18.296129    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:18.323567    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.323567    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:18.328112    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:18.363055    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.363055    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:18.368448    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:18.398543    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.398543    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:18.398543    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:18.398543    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:18.451687    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:18.451738    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:18.512324    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:18.512324    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:18.553614    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:18.553614    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:18.644707    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:18.634792   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.635628   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.638131   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.639176   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.640339   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:18.634792   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.635628   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.638131   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.639176   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.640339   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:18.644734    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:18.644779    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:21.175562    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:21.201442    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:21.233480    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.233480    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:21.237891    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:21.267032    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.267032    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:21.273539    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:21.301291    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.301291    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:21.304622    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:21.333953    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.333953    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:21.336973    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:21.366442    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.366442    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:21.370770    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:21.401250    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.401326    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:21.406507    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:21.434989    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.434989    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:21.438536    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:21.468847    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.468895    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:21.468895    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:21.468937    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:21.506543    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:21.506543    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:21.592900    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:21.582025   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.584472   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.585983   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.587799   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.588955   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:21.582025   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.584472   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.585983   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.587799   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.588955   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:21.592928    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:21.592980    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:21.624073    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:21.624114    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:21.675642    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:21.675642    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:24.243223    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:24.272878    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:24.306285    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.306285    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:24.310609    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:24.340982    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.340982    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:24.344434    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:24.371790    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.371790    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:24.376448    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:24.403045    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.403045    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:24.406643    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:24.436352    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.436352    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:24.440299    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:24.472033    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.472033    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:24.476007    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:24.508554    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.508554    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:24.512161    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:24.542727    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.542727    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:24.542727    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:24.542727    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:24.570829    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:24.570829    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:24.618660    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:24.618660    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:24.682106    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:24.682106    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:24.721952    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:24.721952    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:24.799468    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:24.791295   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.792228   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.793454   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.794593   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.795784   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:24.791295   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.792228   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.793454   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.794593   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.795784   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:27.305001    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:27.330707    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:27.365828    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.365828    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:27.370558    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:27.396820    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.396820    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:27.401269    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:27.430536    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.430536    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:27.434026    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:27.462920    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.462920    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:27.466302    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:27.494753    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.494753    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:27.498776    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:27.526827    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.526827    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:27.530938    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:27.558811    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.558811    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:27.562896    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:27.593235    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.593235    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:27.593235    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:27.593235    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:27.645061    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:27.645061    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:27.708198    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:27.708198    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:27.746161    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:27.746161    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:27.834200    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:27.822744   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.823517   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.828076   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.828851   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.830876   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:27.822744   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.823517   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.828076   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.828851   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.830876   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:27.834200    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:27.834200    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:30.365194    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:30.390907    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:30.422859    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.422859    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:30.426658    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:30.458081    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.458081    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:30.462130    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:30.492792    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.492838    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:30.496517    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:30.535575    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.535575    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:30.539664    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:30.570934    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.570934    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:30.575357    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:30.606013    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.606013    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:30.610553    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:30.637448    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.637448    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:30.640965    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:30.670791    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.670866    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:30.670866    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:30.670866    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:30.701120    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:30.701120    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:30.751223    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:30.751223    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:30.813495    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:30.813495    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:30.853428    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:30.853428    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:30.937812    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:30.926651   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.927983   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.930891   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.931995   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.933300   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:30.926651   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.927983   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.930891   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.931995   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.933300   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:33.442840    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:33.471704    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:33.504567    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.504567    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:33.508564    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:33.540112    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.540147    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:33.544036    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:33.572905    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.572905    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:33.576956    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:33.606272    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.606334    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:33.610145    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:33.637137    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.637137    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:33.641246    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:33.670136    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.670136    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:33.673715    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:33.701659    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.701659    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:33.705326    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:33.736499    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.736585    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:33.736585    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:33.736585    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:33.802820    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:33.802820    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:33.841898    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:33.841898    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:33.928502    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:33.917072   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.918297   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.919525   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.922045   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.923688   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:33.917072   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.918297   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.919525   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.922045   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.923688   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:33.928502    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:33.928502    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:33.954803    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:33.954803    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:36.508990    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:36.532529    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:36.565107    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.565107    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:36.569219    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:36.599219    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.599219    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:36.604130    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:36.641323    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.641399    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:36.644874    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:36.678077    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.678077    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:36.681676    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:36.717361    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.717361    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:36.720484    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:36.758068    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.758131    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:36.761928    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:36.788886    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.788886    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:36.792763    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:36.822518    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.822518    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:36.822518    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:36.822594    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:36.886902    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:36.886902    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:36.926353    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:36.926353    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:37.017351    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:37.005572   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.006475   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.009602   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.011562   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.013067   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:37.005572   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.006475   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.009602   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.011562   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.013067   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:37.017351    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:37.017351    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:37.043945    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:37.043945    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:39.613292    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:39.638402    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:39.668963    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.668963    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:39.674050    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:39.706941    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.706993    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:39.711641    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:39.743407    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.743407    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:39.748540    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:39.776567    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.776567    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:39.780756    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:39.809769    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.809769    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:39.814028    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:39.841619    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.841619    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:39.845432    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:39.872294    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.872294    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:39.876039    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:39.906559    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.906559    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:39.906559    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:39.906559    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:39.971123    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:39.971123    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:40.010767    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:40.010767    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:40.121979    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:40.111150   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.112155   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.113799   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.114617   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.117879   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:40.111150   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.112155   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.113799   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.114617   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.117879   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:40.121979    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:40.121979    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:40.153150    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:40.153150    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:42.714553    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:42.739259    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:42.773825    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.773825    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:42.777653    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:42.806593    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.806617    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:42.811305    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:42.839804    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.839804    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:42.843545    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:42.871645    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.871645    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:42.877455    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:42.907575    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.907674    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:42.911474    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:42.947872    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.947872    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:42.951182    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:42.981899    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.981899    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:42.985358    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:43.015278    4248 logs.go:282] 0 containers: []
	W1212 21:37:43.015278    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:43.015278    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:43.015278    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:43.083520    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:43.083520    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:43.124100    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:43.124100    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:43.208232    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:43.199122   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.200440   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.201500   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.203019   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.204672   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:43.199122   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.200440   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.201500   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.203019   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.204672   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:43.208232    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:43.208232    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:43.234266    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:43.234266    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:45.791967    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:45.818451    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:45.851045    4248 logs.go:282] 0 containers: []
	W1212 21:37:45.851045    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:45.854848    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:45.880205    4248 logs.go:282] 0 containers: []
	W1212 21:37:45.880205    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:45.883681    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:45.910629    4248 logs.go:282] 0 containers: []
	W1212 21:37:45.910629    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:45.914618    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:45.944467    4248 logs.go:282] 0 containers: []
	W1212 21:37:45.944467    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:45.948393    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:45.979772    4248 logs.go:282] 0 containers: []
	W1212 21:37:45.979772    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:45.983154    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:46.011861    4248 logs.go:282] 0 containers: []
	W1212 21:37:46.011947    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:46.016147    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:46.043151    4248 logs.go:282] 0 containers: []
	W1212 21:37:46.043151    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:46.048940    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:46.101712    4248 logs.go:282] 0 containers: []
	W1212 21:37:46.101712    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:46.101712    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:46.101712    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:46.165060    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:46.165060    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:46.204152    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:46.204152    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:46.295737    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:46.284405   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.285323   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.289255   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.290702   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.291840   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:46.284405   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.285323   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.289255   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.290702   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.291840   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:46.295737    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:46.295737    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:46.323140    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:46.323657    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:48.876615    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:48.902293    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:48.935424    4248 logs.go:282] 0 containers: []
	W1212 21:37:48.935424    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:48.939391    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:48.966927    4248 logs.go:282] 0 containers: []
	W1212 21:37:48.966927    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:48.970734    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:49.001644    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.001644    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:49.005407    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:49.035360    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.035360    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:49.042740    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:49.074356    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.074356    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:49.078793    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:49.110567    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.110625    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:49.114551    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:49.145236    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.145236    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:49.149599    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:49.177230    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.177230    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:49.177230    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:49.177230    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:49.240142    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:49.240142    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:49.278723    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:49.278723    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:49.367647    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:49.358947   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.359901   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.361621   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.363095   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.364121   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:49.358947   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.359901   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.361621   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.363095   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.364121   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:49.367647    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:49.367647    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:49.397635    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:49.397635    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:51.962408    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:51.992442    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:52.024460    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.024460    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:52.028629    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:52.060221    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.060221    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:52.064265    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:52.104649    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.104649    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:52.109138    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:52.140487    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.140545    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:52.144120    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:52.172932    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.172932    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:52.176618    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:52.206650    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.206650    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:52.210399    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:52.236993    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.236993    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:52.240861    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:52.270655    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.270655    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:52.270655    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:52.270655    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:52.335104    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:52.335104    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:52.370957    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:52.371840    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:52.457985    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:52.448019   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.449089   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.450136   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.451710   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.452676   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:52.448019   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.449089   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.450136   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.451710   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.452676   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:52.457985    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:52.457985    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:52.486332    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:52.486332    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:55.041298    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:55.065637    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:55.094280    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.094280    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:55.097903    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:55.126902    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.126902    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:55.130716    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:55.159228    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.159228    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:55.163220    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:55.192251    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.192251    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:55.195844    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:55.221302    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.221342    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:55.224818    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:55.251600    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.251600    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:55.258126    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:55.288004    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.288004    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:55.292538    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:55.321503    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.321503    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:55.321503    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:55.321503    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:55.382091    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:55.382091    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:55.417183    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:55.417183    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:55.505809    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:55.497729   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.498853   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.500068   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.501049   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.502394   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:55.497729   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.498853   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.500068   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.501049   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.502394   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:55.505857    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:55.505922    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:55.533563    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:55.533563    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:58.084879    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:58.108938    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:58.141011    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.141011    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:58.144507    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:58.173301    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.173301    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:58.177012    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:58.205946    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.205946    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:58.209603    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:58.239537    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.239626    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:58.243771    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:58.274180    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.274180    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:58.278119    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:58.306549    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.306589    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:58.310707    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:58.341993    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.341993    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:58.345805    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:58.374110    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.374110    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:58.374110    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:58.374110    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:58.438540    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:58.438540    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:58.479144    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:58.479144    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:58.563382    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:58.555856   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.556864   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.558351   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.559659   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.561038   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:58.555856   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.556864   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.558351   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.559659   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.561038   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:58.563382    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:58.563382    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:58.590030    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:58.591001    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:01.143523    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:01.166879    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:01.204311    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.204311    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:01.208667    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:01.236959    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.236959    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:01.241497    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:01.268362    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.268362    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:01.272390    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:01.301769    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.301769    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:01.306386    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:01.334250    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.334250    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:01.338080    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:01.367719    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.367719    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:01.371554    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:01.400912    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.400912    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:01.405087    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:01.433025    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.433079    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:01.433112    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:01.433140    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:01.498716    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:01.498716    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:01.537789    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:01.537789    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:01.621520    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:01.609272   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.610819   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.612400   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.614811   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.616071   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:01.609272   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.610819   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.612400   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.614811   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.616071   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:01.621520    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:01.621520    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:01.651241    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:01.651241    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:04.202726    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:04.233568    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:04.264266    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.264266    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:04.268731    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:04.299179    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.299179    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:04.304521    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:04.333532    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.333532    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:04.337480    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:04.370718    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.370774    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:04.374487    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:04.404113    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.404113    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:04.407484    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:04.439641    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.439641    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:04.442993    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:04.473704    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.473745    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:04.478029    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:04.506810    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.506810    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:04.506810    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:04.506810    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:04.536546    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:04.536546    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:04.595827    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:04.595827    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:04.655750    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:04.655750    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:04.693978    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:04.693978    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:04.780038    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:04.769629   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.770581   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.771942   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.773186   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.774761   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:04.769629   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.770581   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.771942   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.773186   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.774761   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:07.285343    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:07.309791    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:07.342594    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.342658    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:07.346771    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:07.375078    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.375078    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:07.378622    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:07.406406    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.406406    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:07.409700    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:07.439671    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.439702    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:07.443226    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:07.474113    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.474113    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:07.478278    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:07.506266    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.506266    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:07.511246    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:07.539784    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.539813    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:07.543598    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:07.571190    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.571190    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:07.571190    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:07.571190    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:07.621969    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:07.621969    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:07.686280    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:07.686280    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:07.729355    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:07.729355    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:07.818055    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:07.806835   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.807966   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.809280   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.811568   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.812813   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:07.806835   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.807966   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.809280   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.811568   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.812813   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:07.818055    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:07.818055    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:10.353048    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:10.380806    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:10.411111    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.411111    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:10.417906    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:10.445879    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.445879    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:10.449270    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:10.478782    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.478782    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:10.482418    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:10.514768    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.514768    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:10.518402    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:10.549807    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.549841    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:10.553625    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:10.584420    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.584420    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:10.590061    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:10.617570    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.617570    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:10.621915    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:10.650697    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.650697    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:10.650697    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:10.650697    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:10.688035    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:10.688035    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:10.779967    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:10.768220   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.770447   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.773427   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.775250   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.776328   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:10.768220   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.770447   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.773427   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.775250   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.776328   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:10.779967    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:10.779967    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:10.808999    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:10.808999    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:10.857901    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:10.857901    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:13.426838    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:13.455711    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:13.487399    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.487399    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:13.491220    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:13.521694    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.521694    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:13.525468    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:13.554648    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.554648    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:13.559306    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:13.587335    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.587335    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:13.591025    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:13.619654    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.619654    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:13.623563    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:13.653939    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.653939    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:13.657955    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:13.687366    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.687396    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:13.690775    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:13.722113    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.722193    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:13.722231    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:13.722231    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:13.810317    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:13.799024   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.800050   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.801497   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.802454   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.803945   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:13.799024   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.800050   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.801497   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.802454   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.803945   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:13.810317    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:13.810317    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:13.838155    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:13.838155    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:13.883053    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:13.883053    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:13.946291    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:13.946291    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:16.490914    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:16.517055    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:16.546289    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.546289    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:16.549648    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:16.579266    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.579266    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:16.583479    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:16.622750    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.622824    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:16.625968    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:16.653518    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.653558    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:16.657430    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:16.684716    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.684716    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:16.688471    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:16.715508    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.715508    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:16.720093    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:16.747105    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.747105    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:16.751009    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:16.778855    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.778889    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:16.778935    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:16.778935    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:16.866923    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:16.857374   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.858226   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.860682   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.861845   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.863178   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:16.857374   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.858226   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.860682   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.861845   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.863178   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:16.866923    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:16.866923    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:16.893634    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:16.893634    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:16.947106    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:16.947106    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:17.009695    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:17.009695    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:19.555421    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:19.585126    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:19.618491    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.618491    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:19.621943    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:19.649934    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.649934    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:19.654446    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:19.682441    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.682441    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:19.686687    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:19.713873    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.713873    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:19.718086    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:19.746901    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.746901    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:19.751802    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:19.780998    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.780998    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:19.785656    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:19.814435    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.814435    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:19.818376    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:19.842539    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.842539    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:19.842539    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:19.842539    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:19.931943    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:19.922035   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.923477   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.924372   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.926985   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.927824   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:19.922035   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.923477   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.924372   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.926985   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.927824   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:19.931943    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:19.931943    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:19.962377    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:19.962377    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:20.016397    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:20.016397    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:20.080069    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:20.080069    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:22.623830    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:22.648339    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:22.676455    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.676455    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:22.680434    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:22.707663    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.707663    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:22.711156    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:22.740689    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.740689    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:22.747514    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:22.774589    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.774589    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:22.778733    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:22.809957    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.810016    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:22.814216    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:22.843548    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.843548    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:22.848917    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:22.881212    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.881212    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:22.885127    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:22.912249    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.912249    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:22.912249    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:22.912249    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:22.971764    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:22.971764    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:23.012466    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:23.012466    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:23.098040    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:23.088804   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.090233   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.092125   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.092902   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.095639   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:23.088804   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.090233   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.092125   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.092902   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.095639   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:23.098040    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:23.098040    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:23.125246    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:23.125299    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:25.680678    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:25.710865    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:25.744205    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.744205    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:25.748694    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:25.775965    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.775965    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:25.780266    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:25.809226    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.809226    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:25.813428    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:25.843074    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.843074    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:25.847624    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:25.875245    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.875307    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:25.878757    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:25.909526    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.909526    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:25.913226    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:25.940382    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.940382    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:25.945238    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:25.971090    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.971123    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:25.971123    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:25.971123    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:26.056782    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:26.046652   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.047515   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.050195   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.051210   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.054000   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:26.046652   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.047515   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.050195   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.051210   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.054000   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:26.056824    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:26.056824    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:26.088188    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:26.088188    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:26.134947    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:26.134990    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:26.195007    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:26.195007    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:28.743432    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:28.770616    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:28.803520    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.803520    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:28.810180    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:28.835854    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.835854    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:28.839216    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:28.867332    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.867332    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:28.871770    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:28.898967    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.899021    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:28.902579    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:28.930727    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.930781    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:28.934892    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:28.965429    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.965484    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:28.968912    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:28.994989    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.995086    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:28.998524    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:29.029494    4248 logs.go:282] 0 containers: []
	W1212 21:38:29.029494    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:29.029494    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:29.029494    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:29.084546    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:29.084546    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:29.146031    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:29.146031    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:29.185235    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:29.185235    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:29.276958    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:29.265529   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.266811   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.267596   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.272121   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.273001   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:29.265529   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.266811   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.267596   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.272121   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.273001   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:29.277002    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:29.277048    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:31.813255    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:31.837157    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:31.867469    4248 logs.go:282] 0 containers: []
	W1212 21:38:31.867532    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:31.871061    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:31.899568    4248 logs.go:282] 0 containers: []
	W1212 21:38:31.899568    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:31.903533    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:31.932812    4248 logs.go:282] 0 containers: []
	W1212 21:38:31.932812    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:31.937348    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:31.968624    4248 logs.go:282] 0 containers: []
	W1212 21:38:31.968624    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:31.972596    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:31.999542    4248 logs.go:282] 0 containers: []
	W1212 21:38:31.999542    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:32.004209    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:32.034665    4248 logs.go:282] 0 containers: []
	W1212 21:38:32.034665    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:32.038848    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:32.068480    4248 logs.go:282] 0 containers: []
	W1212 21:38:32.068480    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:32.073156    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:32.104268    4248 logs.go:282] 0 containers: []
	W1212 21:38:32.104268    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:32.104268    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:32.104268    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:32.168878    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:32.168878    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:32.209739    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:32.209739    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:32.299388    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:32.287377   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.288363   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.290983   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.292213   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.293196   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:32.287377   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.288363   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.290983   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.292213   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.293196   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:32.299388    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:32.299388    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:32.326590    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:32.327171    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:34.882209    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:34.906646    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:34.937770    4248 logs.go:282] 0 containers: []
	W1212 21:38:34.937770    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:34.941176    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:34.970749    4248 logs.go:282] 0 containers: []
	W1212 21:38:34.970749    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:34.974824    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:35.003731    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.003731    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:35.011153    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:35.043865    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.043865    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:35.047948    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:35.079197    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.079197    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:35.084870    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:35.111591    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.111645    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:35.115847    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:35.144310    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.144310    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:35.148221    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:35.176803    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.176833    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:35.176833    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:35.176833    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:35.236846    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:35.236846    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:35.284685    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:35.284685    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:35.374702    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:35.363981   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.364969   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.367167   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.368565   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.369594   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:35.363981   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.364969   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.367167   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.368565   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.369594   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:35.374702    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:35.374702    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:35.402523    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:35.402584    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:37.960369    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:37.991489    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:38.021000    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.021059    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:38.024791    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:38.056577    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.056577    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:38.061074    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:38.091553    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.091619    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:38.095584    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:38.124245    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.124245    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:38.127814    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:38.156149    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.156149    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:38.159694    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:38.191453    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.191475    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:38.195307    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:38.226021    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.226046    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:38.229445    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:38.258701    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.258701    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:38.258701    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:38.258701    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:38.324178    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:38.324178    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:38.363665    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:38.363665    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:38.454082    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:38.443711   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.444603   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.447135   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.448189   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.449070   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:38.443711   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.444603   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.447135   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.448189   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.449070   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:38.454082    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:38.454082    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:38.481686    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:38.481686    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:41.036796    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:41.064580    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:41.096576    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.096636    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:41.100082    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:41.131382    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.131439    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:41.135017    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:41.164298    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.164360    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:41.167964    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:41.198065    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.198065    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:41.202878    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:41.230510    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.230510    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:41.234299    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:41.263767    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.263767    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:41.267078    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:41.296096    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.296096    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:41.299444    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:41.332967    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.332967    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:41.332967    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:41.332967    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:41.380925    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:41.380925    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:41.445577    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:41.445577    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:41.484612    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:41.484612    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:41.569457    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:41.558338   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.559501   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.561053   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.561965   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.565400   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:41.558338   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.559501   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.561053   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.561965   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.565400   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:41.569457    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:41.569457    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:44.125865    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:44.149891    4248 out.go:203] 
	W1212 21:38:44.151830    4248 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1212 21:38:44.151830    4248 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1212 21:38:44.152349    4248 out.go:285] * Related issues:
	* Related issues:
	W1212 21:38:44.152403    4248 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1212 21:38:44.152403    4248 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1212 21:38:44.154560    4248 out.go:203] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p newest-cni-449900 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0": exit status 105
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-449900
helpers_test.go:244: (dbg) docker inspect newest-cni-449900:

-- stdout --
	[
	    {
	        "Id": "8fae8198a0e2f6167bc49d5761b5dae3aaaf06af529e7d9526a1f943f9f5952a",
	        "Created": "2025-12-12T21:22:35.195234972Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 455053,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T21:32:31.209250611Z",
	            "FinishedAt": "2025-12-12T21:32:28.637338591Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/8fae8198a0e2f6167bc49d5761b5dae3aaaf06af529e7d9526a1f943f9f5952a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8fae8198a0e2f6167bc49d5761b5dae3aaaf06af529e7d9526a1f943f9f5952a/hostname",
	        "HostsPath": "/var/lib/docker/containers/8fae8198a0e2f6167bc49d5761b5dae3aaaf06af529e7d9526a1f943f9f5952a/hosts",
	        "LogPath": "/var/lib/docker/containers/8fae8198a0e2f6167bc49d5761b5dae3aaaf06af529e7d9526a1f943f9f5952a/8fae8198a0e2f6167bc49d5761b5dae3aaaf06af529e7d9526a1f943f9f5952a-json.log",
	        "Name": "/newest-cni-449900",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-449900:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-449900",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4a9bf3c0ee4eaaabafc20e3de9d1f9691ed63701dacc3088a1369c1bfb243b50-init/diff:/var/lib/docker/overlay2/98a48e148b465d6f3cfef773b7defe35ccd4e6e0eb3c49d8b329f27b5b52fa09/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4a9bf3c0ee4eaaabafc20e3de9d1f9691ed63701dacc3088a1369c1bfb243b50/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4a9bf3c0ee4eaaabafc20e3de9d1f9691ed63701dacc3088a1369c1bfb243b50/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4a9bf3c0ee4eaaabafc20e3de9d1f9691ed63701dacc3088a1369c1bfb243b50/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-449900",
	                "Source": "/var/lib/docker/volumes/newest-cni-449900/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-449900",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-449900",
	                "name.minikube.sigs.k8s.io": "newest-cni-449900",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3f31e676235f907b98ae0eadc005b2979e05ef379d1d48aaed62fce9b8873d74",
	            "SandboxKey": "/var/run/docker/netns/3f31e676235f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63036"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63037"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63038"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63039"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63040"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-449900": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "bcedcac448e9e1d98fcddd7097fe310c50b6a637d5f23ebf519e961f822823ab",
	                    "EndpointID": "541ab21703cfa47e96a4b680fdee798dc399db4bccf57c1de0d2c6586095d103",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-449900",
	                        "8fae8198a0e2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-449900 -n newest-cni-449900
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-449900 -n newest-cni-449900: exit status 2 (594.5611ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-449900 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-449900 logs -n 25: (1.699011s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                            ARGS                                                                                                            │           PROFILE            │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-246400                                                                                                                                                                                                  │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ delete  │ -p old-k8s-version-246400                                                                                                                                                                                                  │ old-k8s-version-246400       │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ start   │ -p newest-cni-449900 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-124600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                         │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ stop    │ -p default-k8s-diff-port-124600 --alsologtostderr -v=3                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ image   │ embed-certs-729900 image list --format=json                                                                                                                                                                                │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ pause   │ -p embed-certs-729900 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ unpause │ -p embed-certs-729900 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-124600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                    │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ start   │ -p default-k8s-diff-port-124600 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:23 UTC │
	│ delete  │ -p embed-certs-729900                                                                                                                                                                                                      │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:23 UTC │
	│ delete  │ -p embed-certs-729900                                                                                                                                                                                                      │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ image   │ default-k8s-diff-port-124600 image list --format=json                                                                                                                                                                      │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ pause   │ -p default-k8s-diff-port-124600 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ unpause │ -p default-k8s-diff-port-124600 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ delete  │ -p default-k8s-diff-port-124600                                                                                                                                                                                            │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ delete  │ -p default-k8s-diff-port-124600                                                                                                                                                                                            │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ addons  │ enable metrics-server -p no-preload-285600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                    │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:28 UTC │                     │
	│ stop    │ -p no-preload-285600 --alsologtostderr -v=3                                                                                                                                                                                │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:30 UTC │ 12 Dec 25 21:30 UTC │
	│ addons  │ enable dashboard -p no-preload-285600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                               │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:30 UTC │ 12 Dec 25 21:30 UTC │
	│ start   │ -p no-preload-285600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:30 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-449900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                    │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:30 UTC │                     │
	│ stop    │ -p newest-cni-449900 --alsologtostderr -v=3                                                                                                                                                                                │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:32 UTC │ 12 Dec 25 21:32 UTC │
	│ addons  │ enable dashboard -p newest-cni-449900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                               │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:32 UTC │ 12 Dec 25 21:32 UTC │
	│ start   │ -p newest-cni-449900 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:32 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 21:32:30
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 21:32:30.067892    4248 out.go:360] Setting OutFile to fd 948 ...
	I1212 21:32:30.114132    4248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:32:30.114655    4248 out.go:374] Setting ErrFile to fd 1420...
	I1212 21:32:30.114655    4248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:32:30.127088    4248 out.go:368] Setting JSON to false
	I1212 21:32:30.129182    4248 start.go:133] hostinfo: {"hostname":"minikube4","uptime":9288,"bootTime":1765565862,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 21:32:30.129182    4248 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 21:32:30.137179    4248 out.go:179] * [newest-cni-449900] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 21:32:30.141601    4248 notify.go:221] Checking for updates...
	I1212 21:32:30.144580    4248 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:32:30.147458    4248 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:32:30.149957    4248 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 21:32:30.152754    4248 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 21:32:30.158286    4248 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:32:30.162328    4248 config.go:182] Loaded profile config "newest-cni-449900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:32:30.162996    4248 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 21:32:30.280643    4248 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 21:32:30.284638    4248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:32:30.515373    4248 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 21:32:30.495157348 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:32:30.521131    4248 out.go:179] * Using the docker driver based on existing profile
	I1212 21:32:30.522918    4248 start.go:309] selected driver: docker
	I1212 21:32:30.523440    4248 start.go:927] validating driver "docker" against &{Name:newest-cni-449900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:32:30.523530    4248 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:32:30.607623    4248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:32:30.833176    4248 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 21:32:30.815233925 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:32:30.833176    4248 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 21:32:30.833176    4248 cni.go:84] Creating CNI manager for ""
	I1212 21:32:30.833176    4248 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:32:30.834178    4248 start.go:353] cluster config:
	{Name:newest-cni-449900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:32:30.837195    4248 out.go:179] * Starting "newest-cni-449900" primary control-plane node in "newest-cni-449900" cluster
	I1212 21:32:30.839178    4248 cache.go:134] Beginning downloading kic base image for docker with docker
	I1212 21:32:30.842177    4248 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:32:30.845192    4248 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:32:30.845192    4248 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 21:32:30.846208    4248 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1212 21:32:30.846208    4248 cache.go:65] Caching tarball of preloaded images
	I1212 21:32:30.846208    4248 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 21:32:30.846208    4248 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1212 21:32:30.846208    4248 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\config.json ...
	I1212 21:32:30.923261    4248 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:32:30.923313    4248 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:32:30.923343    4248 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:32:30.923343    4248 start.go:360] acquireMachinesLock for newest-cni-449900: {Name:mk48eea84400331f4298a4f493f82f3b8d104477 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:32:30.923343    4248 start.go:364] duration metric: took 0s to acquireMachinesLock for "newest-cni-449900"
	I1212 21:32:30.923343    4248 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:32:30.923343    4248 fix.go:54] fixHost starting: 
	I1212 21:32:30.931113    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:30.993597    4248 fix.go:112] recreateIfNeeded on newest-cni-449900: state=Stopped err=<nil>
	W1212 21:32:30.993597    4248 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:32:30.996597    4248 out.go:252] * Restarting existing docker container for "newest-cni-449900" ...
	I1212 21:32:31.000598    4248 cli_runner.go:164] Run: docker start newest-cni-449900
	I1212 21:32:31.538801    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:31.592240    4248 kic.go:430] container "newest-cni-449900" state is running.
	I1212 21:32:31.597242    4248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-449900
	I1212 21:32:31.648243    4248 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\config.json ...
	I1212 21:32:31.650253    4248 machine.go:94] provisionDockerMachine start ...
	I1212 21:32:31.653233    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:31.705233    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:31.706241    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:31.706241    4248 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:32:31.708237    4248 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 21:32:34.889437    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-449900
	
	I1212 21:32:34.889437    4248 ubuntu.go:182] provisioning hostname "newest-cni-449900"
	I1212 21:32:34.893345    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:34.953803    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:34.953803    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:34.953803    4248 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-449900 && echo "newest-cni-449900" | sudo tee /etc/hostname
	W1212 21:32:35.616553   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:32:35.153212    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-449900
	
	I1212 21:32:35.157498    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:35.214117    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:35.214117    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:35.214117    4248 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-449900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-449900/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-449900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:32:35.408770    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:32:35.408770    4248 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1212 21:32:35.408770    4248 ubuntu.go:190] setting up certificates
	I1212 21:32:35.408770    4248 provision.go:84] configureAuth start
	I1212 21:32:35.413085    4248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-449900
	I1212 21:32:35.465735    4248 provision.go:143] copyHostCerts
	I1212 21:32:35.466783    4248 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1212 21:32:35.466783    4248 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1212 21:32:35.466783    4248 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 21:32:35.468113    4248 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1212 21:32:35.468113    4248 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1212 21:32:35.468113    4248 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1212 21:32:35.469268    4248 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1212 21:32:35.469268    4248 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1212 21:32:35.469268    4248 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 21:32:35.469826    4248 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-449900 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-449900]
	I1212 21:32:35.674609    4248 provision.go:177] copyRemoteCerts
	I1212 21:32:35.678598    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:32:35.681582    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:35.739388    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:35.872528    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 21:32:35.899869    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 21:32:35.927844    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 21:32:35.957449    4248 provision.go:87] duration metric: took 548.67ms to configureAuth
	I1212 21:32:35.957449    4248 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:32:35.957975    4248 config.go:182] Loaded profile config "newest-cni-449900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:32:35.961590    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:36.020998    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:36.021502    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:36.021530    4248 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 21:32:36.199792    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1212 21:32:36.199792    4248 ubuntu.go:71] root file system type: overlay
	I1212 21:32:36.199792    4248 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 21:32:36.203186    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:36.259998    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:36.260186    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:36.260186    4248 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 21:32:36.441991    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 21:32:36.446390    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:36.501449    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:36.501449    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:36.501449    4248 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 21:32:36.684877    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:32:36.684877    4248 machine.go:97] duration metric: took 5.0345422s to provisionDockerMachine
	I1212 21:32:36.684877    4248 start.go:293] postStartSetup for "newest-cni-449900" (driver="docker")
	I1212 21:32:36.684877    4248 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:32:36.689141    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:32:36.692539    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:36.754930    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:36.889809    4248 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:32:36.897890    4248 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:32:36.897963    4248 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:32:36.897963    4248 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1212 21:32:36.898205    4248 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1212 21:32:36.898730    4248 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> 133962.pem in /etc/ssl/certs
	I1212 21:32:36.903242    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:32:36.916977    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /etc/ssl/certs/133962.pem (1708 bytes)
	I1212 21:32:36.944800    4248 start.go:296] duration metric: took 259.9189ms for postStartSetup
	I1212 21:32:36.949417    4248 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:32:36.952948    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:37.004507    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:37.144023    4248 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:32:37.153150    4248 fix.go:56] duration metric: took 6.229706s for fixHost
	I1212 21:32:37.153150    4248 start.go:83] releasing machines lock for "newest-cni-449900", held for 6.229706s
	I1212 21:32:37.156410    4248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-449900
	I1212 21:32:37.223471    4248 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1212 21:32:37.227811    4248 ssh_runner.go:195] Run: cat /version.json
	I1212 21:32:37.227811    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:37.230777    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:37.282580    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:37.288636    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	W1212 21:32:37.396194    4248 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1212 21:32:37.419380    4248 ssh_runner.go:195] Run: systemctl --version
	I1212 21:32:37.433080    4248 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:32:37.443489    4248 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:32:37.447019    4248 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:32:37.460679    4248 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:32:37.460679    4248 start.go:496] detecting cgroup driver to use...
	I1212 21:32:37.460679    4248 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:32:37.460679    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:32:37.488659    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1212 21:32:37.507342    4248 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1212 21:32:37.507342    4248 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1212 21:32:37.507342    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 21:32:37.523982    4248 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 21:32:37.528069    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 21:32:37.548632    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:32:37.567777    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 21:32:37.588723    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:32:37.608062    4248 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:32:37.629201    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 21:32:37.649560    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 21:32:37.667527    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 21:32:37.690529    4248 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:32:37.710687    4248 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:32:37.729958    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:37.888501    4248 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 21:32:38.016543    4248 start.go:496] detecting cgroup driver to use...
	I1212 21:32:38.016543    4248 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:32:38.021759    4248 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 21:32:38.045532    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:32:38.071354    4248 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:32:38.150544    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:32:38.173582    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 21:32:38.193077    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:32:38.218979    4248 ssh_runner.go:195] Run: which cri-dockerd
	I1212 21:32:38.231122    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 21:32:38.246140    4248 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1212 21:32:38.271127    4248 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 21:32:38.414548    4248 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 21:32:38.549270    4248 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 21:32:38.549270    4248 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 21:32:38.574682    4248 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1212 21:32:38.596980    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:38.746077    4248 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 21:32:39.571827    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:32:39.594550    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1212 21:32:39.618807    4248 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1212 21:32:39.645739    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:32:39.668299    4248 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 21:32:39.807124    4248 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 21:32:39.946119    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:40.086356    4248 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 21:32:40.115320    4248 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1212 21:32:40.136980    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:40.275781    4248 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1212 21:32:40.384906    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:32:40.403362    4248 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 21:32:40.407665    4248 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 21:32:40.417070    4248 start.go:564] Will wait 60s for crictl version
	I1212 21:32:40.421802    4248 ssh_runner.go:195] Run: which crictl
	I1212 21:32:40.435075    4248 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:32:40.477353    4248 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1212 21:32:40.481633    4248 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:32:40.526598    4248 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:32:40.566170    4248 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1212 21:32:40.570307    4248 cli_runner.go:164] Run: docker exec -t newest-cni-449900 dig +short host.docker.internal
	I1212 21:32:40.704237    4248 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1212 21:32:40.709308    4248 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1212 21:32:40.719244    4248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:32:40.739290    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:40.797774    4248 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 21:32:40.799861    4248 kubeadm.go:884] updating cluster {Name:newest-cni-449900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 21:32:40.800240    4248 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 21:32:40.804298    4248 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:32:40.837317    4248 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 21:32:40.837317    4248 docker.go:621] Images already preloaded, skipping extraction
	I1212 21:32:40.841250    4248 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:32:40.875188    4248 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 21:32:40.875188    4248 cache_images.go:86] Images are preloaded, skipping loading
	I1212 21:32:40.875188    4248 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 docker true true} ...
	I1212 21:32:40.875753    4248 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-449900 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:32:40.879753    4248 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 21:32:40.954718    4248 cni.go:84] Creating CNI manager for ""
	I1212 21:32:40.954718    4248 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:32:40.954718    4248 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1212 21:32:40.954718    4248 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-449900 NodeName:newest-cni-449900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:32:40.954718    4248 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-449900"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:32:40.959727    4248 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 21:32:40.972418    4248 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:32:40.977111    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:32:40.988704    4248 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1212 21:32:41.011653    4248 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 21:32:41.032100    4248 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1212 21:32:41.059217    4248 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1212 21:32:41.066089    4248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:32:41.085707    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:41.226868    4248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:32:41.248749    4248 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900 for IP: 192.168.85.2
	I1212 21:32:41.248749    4248 certs.go:195] generating shared ca certs ...
	I1212 21:32:41.248749    4248 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:32:41.249389    4248 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1212 21:32:41.249651    4248 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1212 21:32:41.249726    4248 certs.go:257] generating profile certs ...
	I1212 21:32:41.250263    4248 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\client.key
	I1212 21:32:41.250513    4248 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.key.67e5e88d
	I1212 21:32:41.250722    4248 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\proxy-client.key
	I1212 21:32:41.251524    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem (1338 bytes)
	W1212 21:32:41.251741    4248 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396_empty.pem, impossibly tiny 0 bytes
	I1212 21:32:41.251826    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1212 21:32:41.252063    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 21:32:41.252220    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 21:32:41.252398    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1212 21:32:41.252710    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem (1708 bytes)
	I1212 21:32:41.254134    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:32:41.288378    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 21:32:41.319186    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:32:41.346629    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:32:41.379412    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 21:32:41.407785    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 21:32:41.434595    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:32:41.465218    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:32:41.491104    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem --> /usr/share/ca-certificates/13396.pem (1338 bytes)
	I1212 21:32:41.526955    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /usr/share/ca-certificates/133962.pem (1708 bytes)
	I1212 21:32:41.554086    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:32:41.585032    4248 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:32:41.609643    4248 ssh_runner.go:195] Run: openssl version
	I1212 21:32:41.624413    4248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/133962.pem
	I1212 21:32:41.643262    4248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/133962.pem /etc/ssl/certs/133962.pem
	I1212 21:32:41.661513    4248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133962.pem
	I1212 21:32:41.670034    4248 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 21:32:41.674026    4248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133962.pem
	I1212 21:32:41.722721    4248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:32:41.739428    4248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:32:41.758028    4248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:32:41.775766    4248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:32:41.783842    4248 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:32:41.788493    4248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:32:41.834978    4248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:32:41.852716    4248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13396.pem
	I1212 21:32:41.872502    4248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13396.pem /etc/ssl/certs/13396.pem
	I1212 21:32:41.892268    4248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13396.pem
	I1212 21:32:41.902126    4248 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 21:32:41.906908    4248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13396.pem
	I1212 21:32:41.957407    4248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:32:41.975429    4248 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:32:41.988570    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:32:42.036143    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:32:42.085875    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:32:42.134210    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:32:42.182920    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:32:42.231061    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 21:32:42.275151    4248 kubeadm.go:401] StartCluster: {Name:newest-cni-449900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:32:42.279857    4248 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 21:32:42.316959    4248 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:32:42.330116    4248 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 21:32:42.330159    4248 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 21:32:42.334136    4248 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:32:42.349113    4248 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:32:42.353059    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.410308    4248 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-449900" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:32:42.411003    4248 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-449900" cluster setting kubeconfig missing "newest-cni-449900" context setting]
	I1212 21:32:42.411047    4248 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:32:42.433468    4248 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:32:42.450763    4248 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1212 21:32:42.450851    4248 kubeadm.go:602] duration metric: took 120.6907ms to restartPrimaryControlPlane
	I1212 21:32:42.450851    4248 kubeadm.go:403] duration metric: took 175.6977ms to StartCluster
	I1212 21:32:42.450887    4248 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:32:42.451069    4248 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:32:42.452318    4248 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:32:42.452708    4248 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 21:32:42.452708    4248 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 21:32:42.453236    4248 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-449900"
	I1212 21:32:42.453345    4248 addons.go:70] Setting dashboard=true in profile "newest-cni-449900"
	I1212 21:32:42.453345    4248 addons.go:239] Setting addon dashboard=true in "newest-cni-449900"
	W1212 21:32:42.453442    4248 addons.go:248] addon dashboard should already be in state true
	I1212 21:32:42.453345    4248 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-449900"
	I1212 21:32:42.453442    4248 config.go:182] Loaded profile config "newest-cni-449900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:32:42.453345    4248 addons.go:70] Setting default-storageclass=true in profile "newest-cni-449900"
	I1212 21:32:42.453442    4248 host.go:66] Checking if "newest-cni-449900" exists ...
	I1212 21:32:42.453548    4248 host.go:66] Checking if "newest-cni-449900" exists ...
	I1212 21:32:42.453548    4248 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-449900"
	I1212 21:32:42.456924    4248 out.go:179] * Verifying Kubernetes components...
	I1212 21:32:42.462734    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:42.462734    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:42.462734    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:42.464017    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:42.521801    4248 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1212 21:32:42.522505    4248 addons.go:239] Setting addon default-storageclass=true in "newest-cni-449900"
	I1212 21:32:42.522505    4248 host.go:66] Checking if "newest-cni-449900" exists ...
	I1212 21:32:42.524049    4248 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:32:42.525755    4248 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1212 21:32:42.527435    4248 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:42.527435    4248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:32:42.529307    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 21:32:42.529307    4248 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 21:32:42.532224    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.534064    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.535884    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:42.589450    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:42.589450    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:42.590457    4248 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:32:42.590457    4248 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:32:42.593457    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.639453    4248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:32:42.655360    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:42.731687    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 21:32:42.731721    4248 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 21:32:42.735286    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:42.750859    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 21:32:42.750859    4248 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 21:32:42.774055    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.777032    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 21:32:42.777032    4248 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 21:32:42.833790    4248 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:32:42.834403    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 21:32:42.834403    4248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 21:32:42.837834    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:32:42.838786    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:42.862404    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 21:32:42.862439    4248 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1212 21:32:42.938528    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:42.938593    4248 retry.go:31] will retry after 285.852869ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:42.946574    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 21:32:42.946574    4248 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 21:32:42.970519    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 21:32:42.970570    4248 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 21:32:43.044060    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 21:32:43.044113    4248 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W1212 21:32:43.058105    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.058105    4248 retry.go:31] will retry after 367.133117ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.065874    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 21:32:43.065934    4248 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 21:32:43.090839    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:43.170845    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.170845    4248 retry.go:31] will retry after 360.542613ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.229247    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:43.312297    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.312297    4248 retry.go:31] will retry after 217.305042ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.338331    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:43.430298    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:43.514114    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.514114    4248 retry.go:31] will retry after 194.385848ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.535363    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:43.536351    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:43.629787    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.629787    4248 retry.go:31] will retry after 687.212662ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:32:43.630799    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.630799    4248 retry.go:31] will retry after 521.23237ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.714316    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:43.819832    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.819832    4248 retry.go:31] will retry after 810.162007ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.838681    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:44.158533    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:44.239462    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.239462    4248 retry.go:31] will retry after 722.851273ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.322823    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:44.338752    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:32:44.412482    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.412540    4248 retry.go:31] will retry after 687.458608ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.636450    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:44.715327    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.715327    4248 retry.go:31] will retry after 881.564419ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.839869    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:44.968341    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:45.056052    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.056052    4248 retry.go:31] will retry after 1.083270835s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.107611    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:45.652119   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:32:45.188500    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.188500    4248 retry.go:31] will retry after 1.051455266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.338492    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:45.601770    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:45.692383    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.692901    4248 retry.go:31] will retry after 847.636525ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.840006    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:46.145354    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:46.237762    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.237762    4248 retry.go:31] will retry after 1.841338358s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.245135    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:46.317745    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.317805    4248 retry.go:31] will retry after 1.659422291s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.338989    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:46.545951    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:46.622899    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.622899    4248 retry.go:31] will retry after 2.146117093s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.840053    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:47.339185    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:47.839795    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:47.982731    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:48.067392    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.067392    4248 retry.go:31] will retry after 2.380093596s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.084148    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:48.170522    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.171061    4248 retry.go:31] will retry after 1.169420442s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.339141    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:48.775985    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:32:48.839214    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:32:48.853292    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.853292    4248 retry.go:31] will retry after 1.773821104s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:49.339743    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:49.345080    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:49.426473    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:49.426473    4248 retry.go:31] will retry after 2.584062662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:49.838663    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:50.339208    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:50.452188    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:50.553300    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:50.553376    4248 retry.go:31] will retry after 3.748834475s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:50.633099    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:50.715582    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:50.715582    4248 retry.go:31] will retry after 2.533349108s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:50.839720    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:51.339390    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:51.839102    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:52.016916    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:52.095547    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:52.095725    4248 retry.go:31] will retry after 3.902877695s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:52.339080    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:52.839789    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:53.254021    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:53.339386    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:53.339438    4248 retry.go:31] will retry after 8.052230376s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:53.341078    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:53.838612    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:54.306967    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:54.338622    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:32:54.396253    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:54.396253    4248 retry.go:31] will retry after 4.728755572s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:54.840384    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:32:55.691200   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:32:55.339060    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:55.838985    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:56.003958    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:56.086306    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:56.086306    4248 retry.go:31] will retry after 4.27813674s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:56.339722    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:56.838152    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:57.339158    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:57.840925    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:58.339529    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:58.839108    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:59.131643    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:59.224557    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:59.224611    4248 retry.go:31] will retry after 6.97751667s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:59.340004    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:59.839304    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:00.339076    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:00.368989    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:33:00.453078    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:00.453078    4248 retry.go:31] will retry after 11.55737722s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:00.839680    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:01.337369    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:01.396666    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:33:01.481506    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:01.481506    4248 retry.go:31] will retry after 9.469717232s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:01.840205    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:02.340807    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:02.839775    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:03.338747    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:03.839967    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:04.338805    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:04.839654    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:05.730439   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:05.340177    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:05.839010    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:06.207461    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:33:06.310159    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:06.310159    4248 retry.go:31] will retry after 14.485985358s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:06.338617    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:06.839760    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:07.339574    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:07.839761    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:08.339168    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:08.839559    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:09.340617    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:09.841017    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:10.339655    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:10.840253    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:10.956764    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:33:11.041153    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:11.041153    4248 retry.go:31] will retry after 7.720343102s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:11.339467    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:11.838030    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:12.015476    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:33:12.097506    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:12.098032    4248 retry.go:31] will retry after 13.738254929s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:12.340239    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:12.840739    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:13.338865    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:13.841423    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:14.338850    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:14.839426    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:15.769327   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:15.340112    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:15.839885    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:16.340084    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:16.839133    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:17.340118    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:17.838995    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:18.340239    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:18.768160    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:33:18.839718    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:18.858996    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:18.859079    4248 retry.go:31] will retry after 29.319727103s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:19.339526    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:19.839033    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:20.342579    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:20.799893    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:33:20.838333    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:20.898204    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:20.898204    4248 retry.go:31] will retry after 24.787260988s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:21.340136    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:21.839432    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:22.339934    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:22.838948    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:23.339680    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:23.839713    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:24.339736    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:24.841279    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:25.803331   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:25.340290    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:25.840116    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:25.841834    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:33:25.931872    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:25.931872    4248 retry.go:31] will retry after 19.805631473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:26.340454    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:26.840685    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:27.340143    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:27.839689    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:28.341476    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:28.840343    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:29.340212    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:29.839319    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:30.338669    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:30.838810    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:31.338837    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:31.840375    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:32.339813    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:32.839324    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:33.339926    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:33.839761    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:34.341257    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:34.840212    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:35.842883   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:35.339489    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:35.839856    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:36.337998    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:36.840979    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:37.339343    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:37.839384    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:38.340134    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:38.840040    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:39.339613    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:39.841297    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:40.339971    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:40.839521    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:41.340107    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:41.841992    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:42.341019    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:42.840372    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:42.879262    4248 logs.go:282] 0 containers: []
	W1212 21:33:42.879262    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:42.883398    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:42.914084    4248 logs.go:282] 0 containers: []
	W1212 21:33:42.914084    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:42.917664    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:42.949277    4248 logs.go:282] 0 containers: []
	W1212 21:33:42.949344    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:42.952925    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:42.982130    4248 logs.go:282] 0 containers: []
	W1212 21:33:42.982130    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:42.985453    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:43.018015    4248 logs.go:282] 0 containers: []
	W1212 21:33:43.018015    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:43.021515    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:43.051693    4248 logs.go:282] 0 containers: []
	W1212 21:33:43.051693    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:43.055531    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:43.086156    4248 logs.go:282] 0 containers: []
	W1212 21:33:43.086156    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:43.089864    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:43.124751    4248 logs.go:282] 0 containers: []
	W1212 21:33:43.124784    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:43.124813    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:43.124844    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:43.199819    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:43.199905    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:43.266773    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:43.266773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:43.306805    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:43.306805    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:43.398810    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:43.387138    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.388868    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.391036    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.392819    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.394391    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:43.387138    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.388868    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.391036    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.392819    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.394391    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:43.398906    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:43.398934    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1212 21:33:45.876156   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:45.691215    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:33:45.743028    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:33:45.776796    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:45.776826    4248 retry.go:31] will retry after 41.02577954s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:33:45.824108    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:45.824108    4248 retry.go:31] will retry after 22.15645807s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:45.933456    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:45.956650    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:45.987658    4248 logs.go:282] 0 containers: []
	W1212 21:33:45.987702    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:45.991775    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:46.021440    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.021486    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:46.025175    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:46.052749    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.052749    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:46.057405    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:46.089535    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.089535    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:46.093406    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:46.133904    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.133904    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:46.137545    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:46.167364    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.167364    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:46.172118    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:46.199303    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.199335    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:46.203217    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:46.233482    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.233482    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:46.233482    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:46.233482    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:46.309757    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:46.300789    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.302135    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.303841    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.305512    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.306441    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:46.300789    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.302135    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.303841    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.305512    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.306441    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:46.309757    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:46.309757    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:46.342247    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:46.342247    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:46.392802    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:46.392802    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:46.456332    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:46.456332    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:48.184906    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:33:48.272776    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:48.272776    4248 retry.go:31] will retry after 31.924309129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:49.001621    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:49.031284    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:49.062413    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.062413    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:49.067359    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:49.099157    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.099157    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:49.103086    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:49.145777    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.145841    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:49.148934    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:49.177432    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.177432    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:49.180892    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:49.208677    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.208753    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:49.213124    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:49.242190    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.242273    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:49.246212    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:49.271793    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.271793    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:49.275860    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:49.301204    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.301304    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:49.301304    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:49.301304    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:49.387773    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:49.378267    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.379554    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.380906    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.382182    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.383498    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:49.378267    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.379554    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.380906    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.382182    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.383498    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:49.387773    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:49.387773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:49.417403    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:49.417403    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:49.468610    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:49.468669    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:49.530801    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:49.530801    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:52.075422    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:52.099836    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:52.132317    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.132317    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:52.136505    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:52.168253    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.168329    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:52.172204    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:52.201698    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.201728    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:52.204999    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:52.232400    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.232400    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:52.235747    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:52.264373    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.264373    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:52.268631    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:52.296360    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.296360    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:52.301499    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:52.328506    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.328506    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:52.332618    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:52.364046    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.364046    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:52.364046    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:52.364046    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:52.429893    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:52.429893    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:52.469809    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:52.469809    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:52.561531    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:52.552048    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.553328    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.554454    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.555743    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.556766    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:52.552048    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.553328    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.554454    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.555743    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.556766    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:52.561531    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:52.561531    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:52.589557    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:52.589610    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:33:55.915985   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:55.143558    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:55.169650    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:55.199312    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.199312    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:55.203019    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:55.231107    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.231107    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:55.234903    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:55.262662    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.262662    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:55.267031    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:55.298297    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.298297    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:55.302781    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:55.329536    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.329536    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:55.333805    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:55.361920    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.361920    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:55.366064    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:55.390952    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.390952    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:55.395295    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:55.420565    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.420565    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:55.420565    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:55.420565    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:55.485763    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:55.485763    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:55.523584    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:55.524585    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:55.609239    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:55.599132    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.600050    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.601408    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.603109    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.604199    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:55.599132    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.600050    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.601408    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.603109    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.604199    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:55.609239    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:55.609239    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:55.635439    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:55.635439    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:58.192127    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:58.215045    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:58.245598    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.245598    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:58.249366    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:58.277778    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.277778    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:58.282003    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:58.308406    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.308406    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:58.312530    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:58.343615    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.343615    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:58.347469    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:58.374271    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.374323    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:58.378940    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:58.408013    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.408013    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:58.412815    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:58.442217    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.442217    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:58.446143    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:58.473696    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.473696    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:58.473696    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:58.473696    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:58.512536    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:58.512536    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:58.598847    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:58.588486    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.590430    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.592244    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.593816    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.594663    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:58.588486    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.590430    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.592244    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.593816    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.594663    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:58.598847    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:58.598847    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:58.631972    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:58.631972    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:58.686549    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:58.686549    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:01.254401    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:01.280957    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:01.311649    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.311649    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:01.317426    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:01.347111    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.347111    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:01.351587    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:01.384140    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.384140    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:01.388189    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:01.414950    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.415035    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:01.419033    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:01.451573    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.451573    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:01.458437    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:01.486676    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.486676    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:01.490132    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:01.519922    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.519945    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:01.523525    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:01.551133    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.551133    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:01.551133    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:01.551226    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:01.643156    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:01.629552    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.630752    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.631976    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.634745    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.636369    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:01.629552    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.630752    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.631976    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.634745    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.636369    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:01.643201    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:01.643201    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:01.670680    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:01.670680    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:01.719121    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:01.719121    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:01.778945    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:01.778945    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:04.321335    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:04.346182    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:04.379694    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.379694    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:04.383453    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:04.413770    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.413770    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:04.417438    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:04.449689    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.449689    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:04.453945    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:04.484307    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.484336    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:04.488052    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:04.520437    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.520529    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:04.523808    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:04.554411    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.554486    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:04.558132    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:04.590943    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.590991    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:04.596733    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:04.626155    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.626155    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:04.626155    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:04.626155    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:04.690704    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:04.690704    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:04.737151    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:04.737151    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:04.836298    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:04.823870    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.824851    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.826975    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.828203    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.829992    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:04.823870    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.824851    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.826975    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.828203    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.829992    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:04.836298    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:04.836298    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:04.866456    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:04.866456    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:34:05.955333   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:07.431984    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:07.455983    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:07.487054    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.487054    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:07.491008    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:07.518559    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.518559    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:07.523241    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:07.555141    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.555206    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:07.559053    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:07.586080    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.586129    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:07.590415    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:07.617729    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.617802    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:07.621528    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:07.648847    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.648847    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:07.653016    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:07.679589    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.679589    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:07.682589    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:07.710509    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.710549    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:07.710584    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:07.710584    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:07.740602    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:07.740602    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:07.795596    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:07.795596    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:07.856481    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:07.856481    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:07.896541    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:07.896541    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:34:07.984068    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:34:07.988076    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:07.977659    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.978729    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.979455    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.981723    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.982573    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:07.977659    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.978729    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.979455    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.981723    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.982573    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1212 21:34:08.063575    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:34:08.063575    4248 retry.go:31] will retry after 24.947157304s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:34:10.493129    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:10.518032    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:10.551592    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.551592    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:10.555349    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:10.583478    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.583478    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:10.587337    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:10.614968    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.614968    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:10.618410    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:10.649067    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.649067    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:10.651995    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:10.683408    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.683408    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:10.687055    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:10.719380    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.719380    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:10.722379    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:10.750539    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.750539    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:10.753888    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:10.783559    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.783559    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:10.783559    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:10.783559    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:10.847683    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:10.847683    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:10.886290    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:10.886290    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:10.977825    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:10.965780    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.967047    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.970464    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.971759    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.973064    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:10.965780    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.967047    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.970464    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.971759    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.973064    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:10.977825    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:10.977825    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:11.007383    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:11.007383    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:13.563252    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:13.588423    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:13.621565    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.621565    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:13.625750    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:13.654777    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.654777    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:13.658374    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:13.688618    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.688672    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:13.692472    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:13.719610    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.719610    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:13.723237    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:13.752648    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.752648    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:13.756634    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:13.783829    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.783897    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:13.788161    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:13.819558    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.819558    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:13.823971    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:13.852579    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.852579    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:13.852650    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:13.852650    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:13.917806    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:13.917806    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:13.957291    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:13.957291    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:14.045151    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:14.033435    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.035385    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.037555    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.038496    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.039928    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:14.033435    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.035385    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.037555    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.038496    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.039928    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:14.045151    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:14.045151    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:14.072113    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:14.072113    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:34:15.995826   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:16.634542    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:16.664271    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:16.693836    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.693836    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:16.697626    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:16.724961    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.724961    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:16.729680    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:16.759174    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.759174    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:16.764416    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:16.793992    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.793992    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:16.801287    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:16.833032    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.833032    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:16.837930    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:16.867028    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.867109    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:16.871232    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:16.901023    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.901023    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:16.904729    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:16.932215    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.932215    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:16.932215    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:16.932215    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:17.000205    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:17.000205    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:17.040242    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:17.040242    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:17.128411    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:17.118697    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.119770    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.121001    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.121935    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.122994    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:17.118697    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.119770    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.121001    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.121935    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.122994    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:17.128411    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:17.128411    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:17.154693    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:17.154693    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:19.712060    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:19.740196    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:19.769381    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.769381    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:19.774003    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:19.805171    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.805171    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:19.809714    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:19.839073    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.839073    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:19.846215    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:19.874403    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.874403    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:19.878637    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:19.912913    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.912913    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:19.916626    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:19.944658    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.944710    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:19.948241    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:19.979179    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.979241    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:19.982846    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:20.011754    4248 logs.go:282] 0 containers: []
	W1212 21:34:20.011812    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:20.011864    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:20.011913    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:20.052176    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:20.052176    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:20.143325    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:20.132505    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.133469    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.134667    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.135927    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.139001    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:20.132505    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.133469    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.134667    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.135927    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.139001    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:20.143363    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:20.143410    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:20.172292    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:20.172292    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:20.202316    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:34:20.222997    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:20.222997    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1212 21:34:20.293327    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:34:20.293327    4248 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 21:34:22.836246    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:22.860619    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:22.894925    4248 logs.go:282] 0 containers: []
	W1212 21:34:22.894925    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:22.898706    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:22.928391    4248 logs.go:282] 0 containers: []
	W1212 21:34:22.928391    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:22.931509    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:22.962526    4248 logs.go:282] 0 containers: []
	W1212 21:34:22.962526    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:22.966341    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:22.998739    4248 logs.go:282] 0 containers: []
	W1212 21:34:22.998810    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:23.002260    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:23.038485    4248 logs.go:282] 0 containers: []
	W1212 21:34:23.038485    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:23.042357    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:23.069862    4248 logs.go:282] 0 containers: []
	W1212 21:34:23.069862    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:23.073843    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:23.102066    4248 logs.go:282] 0 containers: []
	W1212 21:34:23.102066    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:23.105940    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:23.134159    4248 logs.go:282] 0 containers: []
	W1212 21:34:23.134159    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:23.134159    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:23.134159    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:23.200245    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:23.200245    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:23.245899    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:23.245899    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:23.334171    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:23.323723    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.324634    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.327041    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.328182    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.329038    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:23.323723    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.324634    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.327041    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.328182    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.329038    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:23.334171    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:23.334171    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:23.363271    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:23.363890    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:34:26.035692   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:25.919925    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:25.943736    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:25.976745    4248 logs.go:282] 0 containers: []
	W1212 21:34:25.976745    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:25.982399    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:26.011034    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.011111    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:26.015074    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:26.041930    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.041960    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:26.046384    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:26.079224    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.079224    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:26.083294    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:26.114533    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.114622    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:26.118567    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:26.144766    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.144766    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:26.148635    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:26.178718    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.178773    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:26.182397    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:26.209360    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.209428    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:26.209458    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:26.209458    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:26.246526    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:26.246526    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:26.333000    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:26.322480    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.324197    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.326202    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.327840    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.328729    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:26.322480    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.324197    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.326202    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.327840    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.328729    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:26.333074    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:26.333074    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:26.364508    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:26.364508    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:26.415398    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:26.415922    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:26.808632    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:34:26.887159    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:34:26.887159    4248 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 21:34:28.982561    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:29.008658    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:29.038321    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.038321    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:29.043365    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:29.074505    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.074505    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:29.079133    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:29.107625    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.107625    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:29.111459    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:29.141746    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.141771    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:29.145351    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:29.173757    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.173757    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:29.177451    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:29.207313    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.207375    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:29.211355    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:29.238896    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.238896    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:29.242592    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:29.271887    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.271955    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:29.271992    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:29.271992    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:29.302135    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:29.302135    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:29.354758    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:29.354787    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:29.415567    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:29.416566    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:29.460567    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:29.460567    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:29.568507    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:29.558828    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.559760    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.561215    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.564376    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.566192    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:29.558828    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.559760    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.561215    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.564376    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.566192    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:32.072619    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:32.098196    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:32.129542    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.129605    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:32.133207    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:32.165871    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.165871    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:32.169728    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:32.197622    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.197622    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:32.201406    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:32.229774    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.229866    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:32.233559    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:32.261871    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.261922    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:32.265550    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:32.292765    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.292838    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:32.297859    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:32.324309    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.324309    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:32.330216    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:32.361218    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.361300    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:32.361300    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:32.361300    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:32.397467    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:32.397467    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:32.486261    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:32.475376    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.476624    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.477661    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.478789    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.479928    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:32.475376    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.476624    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.477661    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.478789    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.479928    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:32.486261    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:32.486261    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:32.514032    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:32.514583    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:32.560591    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:32.560639    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:33.016195    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:34:33.096004    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:34:33.096004    4248 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 21:34:33.099548    4248 out.go:179] * Enabled addons: 
	I1212 21:34:33.102664    4248 addons.go:530] duration metric: took 1m50.6481558s for enable addons: enabled=[]
	W1212 21:34:36.071150   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:35.127374    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:35.151249    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:35.181629    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.181629    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:35.185603    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:35.214096    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.214151    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:35.218239    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:35.247180    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.247201    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:35.253408    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:35.284673    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.284673    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:35.288386    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:35.317675    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.317675    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:35.320949    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:35.350108    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.350178    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:35.353675    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:35.385201    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.385201    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:35.388633    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:35.416281    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.416281    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:35.416281    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:35.416281    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:35.444773    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:35.444773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:35.494185    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:35.494185    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:35.554851    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:35.554851    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:35.593466    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:35.593466    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:35.675497    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:35.665255    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.666291    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.667389    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.668747    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.670048    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:35.665255    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.666291    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.667389    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.668747    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.670048    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:38.181493    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:38.207315    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:38.240970    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.240970    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:38.244930    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:38.270853    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.270853    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:38.274626    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:38.302607    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.302607    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:38.306311    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:38.332974    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.332998    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:38.336938    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:38.366523    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.366523    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:38.370763    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:38.400788    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.400855    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:38.404730    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:38.432105    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.432140    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:38.435718    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:38.468252    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.468252    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:38.468252    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:38.468252    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:38.498010    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:38.498010    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:38.549065    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:38.549065    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:38.610282    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:38.610282    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:38.649865    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:38.649865    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:38.742735    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:38.729670    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.730594    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.734869    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.735575    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.737749    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:38.729670    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.730594    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.734869    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.735575    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.737749    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:41.248859    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:41.273451    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:41.307576    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.307576    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:41.311206    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:41.341191    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.341191    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:41.344873    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:41.373089    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.373089    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:41.377064    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:41.407927    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.407927    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:41.411904    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:41.438747    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.438747    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:41.442684    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:41.471705    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.471705    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:41.475643    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:41.502964    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.503009    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:41.506219    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:41.537468    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.537468    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:41.537468    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:41.537468    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:41.601385    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:41.601385    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:41.640441    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:41.640441    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:41.726762    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:41.716055    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.717335    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.718444    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.719692    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.720641    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:41.716055    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.717335    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.718444    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.719692    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.720641    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:41.727284    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:41.727418    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:41.753195    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:41.753250    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
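The "container status" probe above chains two tools with `||`: it prefers `crictl` and falls back to `docker ps -a` when `crictl` is missing or fails. A minimal, self-contained illustration of that fallback pattern (using a generic failing command as a stand-in, since `crictl`/`docker` may not be installed where this snippet runs):

```shell
# `A || B` runs B only when A exits nonzero. minikube uses this shape
# to prefer crictl over docker; `false` stands in for the failing tool
# so the sketch runs anywhere.
status=$(false || echo "fallback ran")
echo "$status"   # prints: fallback ran
```

The same shape appears in the log's backtick substitution: `` `which crictl || echo crictl` `` resolves to the real `crictl` path when present, and otherwise to the bare name `crictl`, whose failure then triggers the `docker ps -a` branch.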
	I1212 21:34:44.308085    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:44.334644    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:44.365798    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.365798    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:44.369363    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:44.401410    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.401463    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:44.405291    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:44.434343    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.434343    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:44.438273    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:44.468474    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.468525    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:44.474306    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:44.500642    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.500642    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:44.504057    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:44.533188    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.533188    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:44.538912    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:44.570110    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.570156    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:44.573802    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:44.628237    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.628237    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:44.628313    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:44.628313    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:44.695236    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:44.695236    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:44.756020    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:44.756020    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:44.797607    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:44.797607    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:44.885974    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:44.873979    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.875233    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.876512    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.878696    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.880447    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:44.873979    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.875233    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.876512    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.878696    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.880447    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:44.885974    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:44.885974    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1212 21:34:46.110883   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:47.418476    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:47.448163    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:47.485273    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.485367    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:47.489088    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:47.519610    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.519610    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:47.523527    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:47.556797    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.556797    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:47.561198    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:47.592455    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.592486    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:47.597545    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:47.642336    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.642336    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:47.646873    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:47.674652    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.674652    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:47.678167    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:47.711489    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.711583    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:47.715137    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:47.743744    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.743744    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:47.743744    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:47.743744    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:47.772500    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:47.772500    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:47.821703    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:47.821703    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:47.885067    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:47.886068    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:47.927691    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:47.927691    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:48.009816    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:47.997942    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:47.998780    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.001728    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.003155    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.003997    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:47.997942    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:47.998780    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.001728    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.003155    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.003997    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:50.515712    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:50.539433    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:50.569952    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.570011    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:50.573934    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:50.602042    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.602042    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:50.606815    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:50.637314    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.637314    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:50.641189    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:50.671374    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.671448    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:50.675158    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:50.705121    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.705121    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:50.708839    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:50.736349    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.736349    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:50.740642    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:50.766780    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.766780    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:50.771844    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:50.799830    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.799830    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:50.799830    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:50.799935    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:50.864221    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:50.864221    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:50.902900    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:50.902900    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:50.987201    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:50.977230    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.978460    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.979921    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.981707    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.982796    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:50.977230    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.978460    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.979921    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.981707    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.982796    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:50.987245    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:50.987307    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:51.013974    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:51.013974    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:53.565470    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:53.591015    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:53.622676    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.622700    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:53.626721    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:53.656673    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.656708    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:53.660173    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:53.690897    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.690897    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:53.695672    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:53.724746    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.724746    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:53.729650    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:53.755860    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.755860    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:53.761786    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:53.788721    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.788721    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:53.792819    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:53.824882    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.824923    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:53.827878    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:53.861835    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.861835    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:53.861835    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:53.861922    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:53.923537    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:53.923537    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:53.963431    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:53.963431    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:54.047319    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:54.037350    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.038501    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.039589    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.040496    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.043315    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:54.037350    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.038501    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.039589    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.040496    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.043315    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:54.047386    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:54.047386    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:54.072633    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:54.072633    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:34:56.149055   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:56.625180    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:56.650655    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:56.685280    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.685318    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:56.689156    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:56.714493    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.714493    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:56.718695    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:56.746923    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.746990    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:56.750886    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:56.778419    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.778419    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:56.783916    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:56.811946    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.811946    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:56.815910    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:56.846245    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.846245    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:56.849750    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:56.881552    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.881612    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:56.886159    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:56.914494    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.914494    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:56.914494    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:56.914494    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:56.978107    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:56.978107    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:57.017141    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:57.017141    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:57.105278    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:57.095758    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.096802    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.099867    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.100954    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.102366    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:57.095758    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.096802    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.099867    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.100954    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.102366    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:57.105278    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:57.105278    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:57.136106    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:57.136106    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:59.698008    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:59.721859    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:59.752926    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.752926    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:59.758293    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:59.787817    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.787817    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:59.792012    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:59.820724    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.820724    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:59.824383    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:59.853943    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.853943    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:59.856939    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:59.884234    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.884234    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:59.887359    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:59.917769    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.917769    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:59.920766    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:59.947735    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.947735    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:59.950845    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:59.982686    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.982686    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:59.982686    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:59.982686    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:00.047428    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:00.047428    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:00.087722    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:00.087722    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:00.173037    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:00.162970    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.163861    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.166472    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.167086    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.169872    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:00.162970    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.163861    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.166472    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.167086    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.169872    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:00.173037    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:00.173124    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:00.200722    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:00.200722    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:02.758771    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:02.785740    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:02.818761    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.818761    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:02.822630    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:02.856985    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.857041    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:02.860042    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:02.891635    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.891635    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:02.896827    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:02.927006    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.927006    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:02.930675    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:02.961911    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.961911    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:02.966203    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:02.995618    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.995618    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:03.000489    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:03.029638    4248 logs.go:282] 0 containers: []
	W1212 21:35:03.029720    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:03.033579    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:03.062226    4248 logs.go:282] 0 containers: []
	W1212 21:35:03.062226    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:03.062226    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:03.062226    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:03.123924    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:03.123924    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:03.164267    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:03.164267    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:03.278702    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:03.266561    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.267580    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.268479    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.270653    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.271579    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:03.266561    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.267580    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.268479    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.270653    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.271579    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:03.278702    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:03.278702    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:03.310678    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:03.310678    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:35:06.187241   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:35:05.870522    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:05.895548    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:05.931745    4248 logs.go:282] 0 containers: []
	W1212 21:35:05.931745    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:05.935905    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:05.967105    4248 logs.go:282] 0 containers: []
	W1212 21:35:05.967167    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:05.970526    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:05.999378    4248 logs.go:282] 0 containers: []
	W1212 21:35:05.999501    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:06.005272    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:06.033559    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.033559    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:06.037432    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:06.067367    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.067423    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:06.071216    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:06.098778    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.098778    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:06.102725    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:06.129330    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.129373    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:06.133426    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:06.161909    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.161982    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:06.161982    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:06.161982    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:06.227303    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:06.227303    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:06.268038    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:06.268038    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:06.361371    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:06.349507    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.350379    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.353273    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.354715    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.355937    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:06.349507    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.350379    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.353273    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.354715    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.355937    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:06.361371    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:06.361371    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:06.387773    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:06.387773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:08.952500    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:08.976516    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:09.008567    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.008567    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:09.012505    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:09.041661    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.041661    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:09.045897    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:09.072715    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.072715    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:09.076471    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:09.104975    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.104975    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:09.110239    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:09.137529    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.137529    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:09.142874    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:09.172751    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.172856    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:09.176271    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:09.208124    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.208124    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:09.211966    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:09.240860    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.240922    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:09.240922    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:09.240922    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:09.277501    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:09.277501    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:09.365788    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:09.355192    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.356293    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.357382    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.358419    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.359695    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:09.355192    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.356293    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.357382    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.358419    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.359695    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:09.365788    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:09.365788    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:09.393564    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:09.393564    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:09.446625    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:09.446625    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:12.014046    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:12.039061    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:12.073012    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.073012    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:12.076517    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:12.106078    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.106078    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:12.110106    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:12.148792    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.148792    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:12.153359    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:12.180485    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.180485    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:12.184489    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:12.216517    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.216517    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:12.219938    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:12.248201    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.248280    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:12.251955    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:12.279303    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.279303    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:12.283688    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:12.311155    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.311155    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:12.311240    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:12.311240    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:12.363330    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:12.363330    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:12.424272    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:12.424272    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:12.465092    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:12.465092    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:12.553464    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:12.543149    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.544047    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.546957    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.548802    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.550266    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:12.543149    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.544047    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.546957    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.548802    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.550266    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:12.553464    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:12.553464    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:15.089907    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:15.115463    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	W1212 21:35:16.227474   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:35:15.148783    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.148874    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:15.153957    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:15.184011    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.184098    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:15.189531    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:15.217178    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.217178    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:15.220770    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:15.250751    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.250751    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:15.254712    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:15.285217    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.285217    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:15.289685    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:15.318549    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.318549    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:15.322745    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:15.350449    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.350449    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:15.354843    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:15.387373    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.387373    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:15.387373    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:15.387448    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:15.447923    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:15.447923    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:15.486013    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:15.486013    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:15.578245    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:15.568067    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.569055    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.570286    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.571140    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.573311    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:15.568067    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.569055    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.570286    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.571140    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.573311    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:15.578325    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:15.578352    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:15.605626    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:15.605626    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:18.166679    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:18.192323    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:18.225358    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.225358    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:18.228980    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:18.257552    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.257552    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:18.261101    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:18.289460    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.289460    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:18.294831    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:18.323718    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.323799    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:18.327263    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:18.355946    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.356051    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:18.360915    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:18.389130    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.389212    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:18.392968    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:18.421891    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.421979    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:18.425694    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:18.454158    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.454158    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:18.454158    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:18.454158    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:18.537920    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:18.527608    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.528385    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.531174    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.532949    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.533980    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:18.527608    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.528385    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.531174    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.532949    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.533980    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:18.537920    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:18.537920    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:18.567575    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:18.567575    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:18.620910    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:18.620945    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:18.683030    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:18.683030    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:21.227570    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:21.256918    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:21.288783    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.288783    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:21.292853    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:21.323426    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.323426    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:21.326841    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:21.358519    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.358519    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:21.364432    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:21.396744    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.396829    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:21.400322    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:21.428939    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.428939    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:21.432771    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:21.462453    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.462453    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:21.466020    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:21.494057    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.494106    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:21.497434    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:21.528610    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.528649    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:21.528649    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:21.528649    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:21.565220    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:21.565220    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:21.656317    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:21.646689    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.647621    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.649439    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.651349    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.653188    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:21.646689    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.647621    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.649439    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.651349    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.653188    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:21.656317    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:21.656317    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:21.685835    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:21.685835    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:21.735864    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:21.735864    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:24.301555    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:24.325946    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:24.356277    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.356277    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:24.360286    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:24.388916    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.388916    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:24.392624    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:24.420665    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.420696    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:24.424318    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:24.453296    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.453296    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:24.456761    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:24.485923    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.485923    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:24.489706    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:24.521430    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.521430    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:24.525001    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:24.553182    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.553182    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:24.557651    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:24.585999    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.585999    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:24.585999    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:24.585999    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:24.649025    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:24.649025    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:24.687813    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:24.687813    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:24.771442    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:24.762805    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.764135    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.765433    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.766914    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.768359    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:24.762805    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.764135    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.765433    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.766914    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.768359    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:24.771442    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:24.771442    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:24.798209    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:24.798236    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:35:26.266522   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:35:27.358394    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:27.382187    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:27.414963    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.414963    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:27.418575    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:27.446874    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.446874    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:27.450946    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:27.478168    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.478206    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:27.481426    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:27.510494    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.510494    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:27.514962    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:27.544571    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.544571    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:27.548425    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:27.577760    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.577760    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:27.583647    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:27.611248    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.611321    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:27.614827    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:27.643657    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.643657    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:27.643657    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:27.643657    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:27.727590    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:27.720083    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.721080    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.722381    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.723519    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.724748    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:27.720083    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.721080    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.722381    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.723519    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.724748    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:27.727590    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:27.727590    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:27.758480    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:27.758480    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:27.807919    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:27.807919    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:27.868159    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:27.868159    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:30.414374    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:30.439656    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:30.469888    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.469958    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:30.473898    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:30.506297    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.506297    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:30.510214    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:30.545930    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.545982    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:30.549910    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:30.576713    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.576713    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:30.581000    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:30.611561    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.611561    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:30.615085    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:30.643517    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.643600    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:30.647297    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:30.677589    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.677589    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:30.681477    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:30.712989    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.713050    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:30.713050    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:30.713050    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:30.800951    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:30.787443    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.789072    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.790773    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.791718    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.795269    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:30.787443    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.789072    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.790773    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.791718    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.795269    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:30.800985    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:30.801031    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:30.827165    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:30.827165    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:30.877219    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:30.877219    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:30.939298    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:30.939298    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:33.484552    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:33.509270    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:33.543536    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.543536    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:33.547226    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:33.579403    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.579456    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:33.583656    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:33.611689    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.611715    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:33.615934    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:33.641875    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.641939    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:33.645973    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:33.678622    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.678622    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:33.682927    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:33.712281    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.712305    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:33.716240    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:33.744051    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.744127    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:33.747417    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:33.779521    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.779591    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:33.779591    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:33.779591    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:33.840187    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:33.840187    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:33.882639    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:33.882639    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:33.969148    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:33.957711    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.958835    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.961164    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.962371    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.963325    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:33.957711    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.958835    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.961164    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.962371    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.963325    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:33.969148    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:33.969148    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:33.997909    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:33.997909    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:35:36.305111   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:35:36.549852    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:36.575183    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:36.607678    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.607678    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:36.611597    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:36.639236    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.639236    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:36.642949    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:36.671624    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.671715    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:36.677217    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:36.704204    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.704254    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:36.707613    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:36.736929    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.736929    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:36.741122    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:36.769406    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.769406    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:36.772706    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:36.799543    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.799620    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:36.803815    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:36.831079    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.831159    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:36.831159    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:36.831200    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:36.896378    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:36.896378    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:36.934866    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:36.934866    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:37.023664    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:37.013641    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.015155    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.015847    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.018003    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.018979    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:37.013641    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.015155    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.015847    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.018003    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.018979    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:37.023664    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:37.023664    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:37.053528    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:37.053528    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:39.635097    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:39.658801    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:39.689842    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.689897    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:39.693102    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:39.720497    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.720497    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:39.724029    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:39.755115    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.755115    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:39.759351    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:39.788837    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.788837    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:39.795292    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:39.820715    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.820715    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:39.824986    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:39.851308    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.851308    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:39.855167    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:39.885106    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.885106    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:39.888522    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:39.919426    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.919426    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:39.919426    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:39.919426    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:40.002497    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:39.992667    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.993466    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.995734    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.996894    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.998122    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:39.992667    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.993466    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.995734    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.996894    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.998122    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:40.002497    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:40.002497    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:40.033288    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:40.033332    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:40.080834    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:40.080834    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:40.164330    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:40.164330    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:42.710863    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:42.734865    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:42.767778    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.767778    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:42.771433    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:42.798128    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.798128    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:42.801849    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:42.831381    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.831381    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:42.834753    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:42.862923    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.862977    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:42.866577    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:42.894694    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.894694    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:42.898324    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:42.927095    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.927169    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:42.930897    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:42.960437    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.960485    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:42.963899    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:42.992769    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.992769    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:42.992769    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:42.992769    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:43.076050    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:43.066448    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.067916    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.069057    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.070194    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.071270    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:43.066448    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.067916    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.069057    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.070194    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.071270    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:43.076050    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:43.076050    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:43.117444    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:43.117444    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:43.163609    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:43.163675    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:43.222686    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:43.222686    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1212 21:35:46.341652   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:35:45.769012    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:45.792622    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:45.828260    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.828300    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:45.832060    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:45.860265    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.860345    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:45.864126    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:45.892449    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.892449    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:45.895900    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:45.928107    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.928492    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:45.933848    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:45.963204    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.963204    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:45.967539    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:45.994068    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.994068    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:45.997960    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:46.029774    4248 logs.go:282] 0 containers: []
	W1212 21:35:46.029774    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:46.034005    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:46.064207    4248 logs.go:282] 0 containers: []
	W1212 21:35:46.064207    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:46.064275    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:46.064297    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:46.150334    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:46.151334    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:46.192422    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:46.192422    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:46.282161    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:46.268121    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.269865    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.274691    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.275494    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.278053    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:46.268121    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.269865    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.274691    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.275494    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.278053    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:46.282161    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:46.282161    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:46.308247    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:46.308247    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:48.880691    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:48.905168    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:48.936143    4248 logs.go:282] 0 containers: []
	W1212 21:35:48.936143    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:48.941055    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:48.967633    4248 logs.go:282] 0 containers: []
	W1212 21:35:48.967681    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:48.972986    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:49.001908    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.001978    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:49.005690    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:49.033288    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.033288    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:49.037158    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:49.068272    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.068272    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:49.072674    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:49.118349    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.118385    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:49.123821    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:49.152003    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.152003    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:49.155603    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:49.184782    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.184857    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:49.184857    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:49.184857    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:49.245561    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:49.245561    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:49.286211    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:49.286211    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:49.376977    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:49.367834   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.369032   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.370233   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.372131   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.374116   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:49.367834   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.369032   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.370233   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.372131   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.374116   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:49.376977    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:49.376977    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:49.403713    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:49.403713    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:51.956745    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:51.982481    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:52.016133    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.016133    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:52.021023    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:52.049536    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.049536    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:52.053672    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:52.080846    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.080846    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:52.084803    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:52.113297    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.113338    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:52.116543    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:52.147940    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.147940    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:52.151545    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:52.182320    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.182320    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:52.186389    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:52.214710    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.214710    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:52.219053    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:52.245190    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.245190    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:52.245190    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:52.245190    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:52.298311    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:52.298311    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:52.366732    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:52.366732    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:52.407792    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:52.407792    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:52.496661    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:52.484312   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.485150   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.488916   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.491754   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.493226   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:52.484312   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.485150   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.488916   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.491754   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.493226   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:52.496661    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:52.496715    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:55.027190    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:55.056715    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:55.092856    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.092927    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:55.096633    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:55.129725    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.129780    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:55.133503    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:55.161685    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.161764    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:55.165325    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:55.194081    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.194081    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:55.197364    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:55.229572    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.229572    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:55.233158    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:55.260429    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.260429    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:55.264792    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:55.292582    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.292582    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:55.296385    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:55.324732    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.324732    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:55.324732    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:55.324732    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:55.378532    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:55.378610    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:55.442350    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:55.442350    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:55.482305    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:55.482305    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:55.568490    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:55.558199   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.559481   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.560697   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.561968   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.565448   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:55.558199   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.559481   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.560697   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.561968   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.565448   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:55.568490    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:55.568490    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:58.101014    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:58.125393    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:58.155725    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.155725    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:58.159868    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:58.189119    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.189119    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:58.193157    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:58.223237    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.223237    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:58.226683    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:58.255695    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.255753    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:58.260433    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:58.288788    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.288859    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:58.292437    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:58.323598    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.323671    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:58.327478    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:58.354090    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.354178    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:58.357888    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:58.386253    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.386279    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:58.386314    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:58.386314    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:58.447141    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:58.447141    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:58.486943    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:58.486943    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:58.568464    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:58.560119   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.561312   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.562297   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.564010   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.565939   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:58.560119   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.561312   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.562297   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.564010   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.565939   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:58.568464    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:58.568464    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:58.595849    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:58.595886    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:35:56.384511   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:36:01.149596    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:01.175623    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:01.208519    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.208575    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:01.211916    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:01.245312    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.245312    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:01.249530    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:01.278392    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.278392    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:01.286444    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:01.317355    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.317406    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:01.321724    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:01.353217    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.353217    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:01.357807    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:01.391831    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.391831    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:01.395723    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:01.424095    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.424095    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:01.429268    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:01.462926    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.462926    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:01.462926    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:01.462926    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:01.551074    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:01.539269   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.540233   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.541438   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.542446   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.543830   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:01.539269   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.540233   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.541438   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.542446   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.543830   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:01.551074    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:01.551074    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:01.578369    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:01.578369    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:01.629021    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:01.629021    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:01.698809    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:01.698809    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:04.243670    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:04.267614    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:04.301625    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.301625    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:04.305862    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:04.333024    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.333024    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:04.336022    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:04.367192    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.367192    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:04.370605    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:04.399595    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.399648    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:04.403374    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:04.432778    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.432778    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:04.436573    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:04.467412    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.467412    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:04.471329    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:04.498578    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.498578    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:04.502597    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:04.531764    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.531784    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:04.531784    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:04.531843    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:04.570958    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:04.570958    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:04.661113    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:04.648460   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.649248   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.652578   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.653840   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.654704   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:04.648460   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.649248   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.652578   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.653840   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.654704   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:04.661169    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:04.661169    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:04.690481    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:04.690542    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:04.741754    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:04.741754    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:07.308278    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:07.331410    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:07.365378    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.365378    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:07.369132    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:07.399048    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.399048    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:07.403054    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:07.435986    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.435986    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:07.440195    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:07.468277    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.468277    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:07.473061    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:07.502665    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.502737    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:07.505870    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:07.535294    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.535294    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:07.539205    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:07.568443    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.568443    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:07.571680    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:07.603485    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.603485    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:07.603485    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:07.603485    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:07.654029    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:07.654069    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:07.718737    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:07.718737    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:07.758197    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:07.758197    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:07.848949    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:07.837788   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.838778   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.839873   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.841153   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.842068   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:07.837788   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.838778   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.839873   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.841153   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.842068   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:07.848949    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:07.848949    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1212 21:36:06.422793   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:36:10.383554    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:10.406923    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:10.436672    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.438058    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:10.441328    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:10.470329    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.470329    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:10.475355    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:10.503029    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.504040    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:10.508067    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:10.535078    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.535078    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:10.538911    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:10.572578    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.572578    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:10.576953    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:10.605274    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.605274    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:10.609586    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:10.636893    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.636893    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:10.640901    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:10.670634    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.670634    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:10.670634    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:10.670634    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:10.740301    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:10.740301    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:10.777927    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:10.777927    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:10.872052    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:10.861842   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.862596   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.865231   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.866657   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.867639   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:10.861842   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.862596   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.865231   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.866657   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.867639   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:10.872052    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:10.872052    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:10.903069    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:10.903069    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:13.460331    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:13.482121    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:13.513917    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.513943    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:13.517730    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:13.551122    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.551122    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:13.554989    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:13.585497    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.585531    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:13.591062    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:13.617529    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.617529    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:13.620977    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:13.649520    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.649563    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:13.653279    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:13.680320    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.680320    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:13.684170    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:13.715222    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.715222    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:13.719105    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:13.748387    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.748387    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:13.748387    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:13.748387    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:13.813468    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:13.813468    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:13.854552    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:13.854552    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:13.940965    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:13.930876   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.931992   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.933250   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.934621   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.935762   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:13.930876   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.931992   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.933250   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.934621   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.935762   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:13.940965    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:13.940965    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:13.967276    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:13.967276    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:16.517841    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:16.542582    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:16.572379    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.572379    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:16.576052    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:16.603452    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.603544    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:16.607222    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:16.634623    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.634623    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:16.638649    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:16.669129    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.669129    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:16.673369    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:16.701294    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.701294    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:16.707834    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:16.734962    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.734962    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:16.739394    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:16.768315    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.768315    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:16.772559    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:16.801591    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.801670    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:16.801687    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:16.801687    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:16.866870    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:16.866870    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:16.906766    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:16.906766    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:16.998441    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:16.985782   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.986673   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.991870   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.993024   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.993940   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:16.985782   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.986673   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.991870   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.993024   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.993940   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:16.998441    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:16.998441    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:17.026255    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:17.026255    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:19.584671    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:19.610156    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:19.644064    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.644129    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:19.648288    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:19.678479    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.678479    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:19.682139    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:19.711766    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.711766    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:19.715209    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:19.744913    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.744913    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:19.748961    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:19.780312    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.780312    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:19.784184    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:19.812347    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.812347    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:19.816306    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:19.844923    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.844923    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:19.848927    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:19.877442    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.877442    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:19.877442    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:19.877442    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:19.942152    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:19.942152    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:19.981218    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:19.981218    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:20.072288    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:20.063301   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.064454   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.065982   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.067322   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.068426   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:20.063301   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.064454   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.065982   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.067322   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.068426   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:20.072288    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:20.072288    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:20.099643    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:20.099643    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:36:16.463968   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:36:25.116556   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1212 21:36:25.116556   13804 node_ready.go:38] duration metric: took 6m0.0006991s for node "no-preload-285600" to be "Ready" ...
	I1212 21:36:22.659535    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:22.685256    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:22.716650    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.716701    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:22.720282    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:22.748382    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.748382    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:22.752865    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:22.781255    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.781255    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:22.785427    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:22.817875    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.817875    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:22.822168    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:22.850625    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.850625    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:22.854306    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:22.882603    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.882665    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:22.886850    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:22.915661    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.915661    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:22.919273    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:22.948188    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.948219    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:22.948219    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:22.948272    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:23.001854    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:23.001854    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:23.062918    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:23.062918    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:23.102898    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:23.102898    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:23.188691    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:23.179388   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.180542   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.181616   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.183060   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.184504   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:23.179388   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.180542   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.181616   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.183060   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.184504   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:23.188733    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:23.188773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:25.120615   13804 out.go:203] 
	W1212 21:36:25.123657   13804 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 21:36:25.123657   13804 out.go:285] * 
	W1212 21:36:25.125621   13804 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 21:36:25.128573   13804 out.go:203] 
	I1212 21:36:25.720984    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:25.747517    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:25.789126    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.789126    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:25.792555    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:25.825100    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.825100    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:25.829108    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:25.859944    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.859944    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:25.862936    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:25.899027    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.899027    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:25.903029    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:25.932069    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.932069    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:25.937652    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:25.970039    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.970039    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:25.974772    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:26.007166    4248 logs.go:282] 0 containers: []
	W1212 21:36:26.007166    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:26.010547    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:26.043326    4248 logs.go:282] 0 containers: []
	W1212 21:36:26.043326    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:26.043380    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:26.043380    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:26.136579    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:26.129570   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.130713   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.131776   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.133029   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.133944   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:26.129570   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.130713   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.131776   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.133029   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.133944   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:26.136579    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:26.136579    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:26.164100    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:26.164100    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:26.215761    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:26.215761    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:26.284627    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:26.284627    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:28.841950    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:28.867715    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:28.905745    4248 logs.go:282] 0 containers: []
	W1212 21:36:28.905745    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:28.908970    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:28.939518    4248 logs.go:282] 0 containers: []
	W1212 21:36:28.939518    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:28.943636    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:28.973085    4248 logs.go:282] 0 containers: []
	W1212 21:36:28.973085    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:28.977068    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:29.006533    4248 logs.go:282] 0 containers: []
	W1212 21:36:29.006533    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:29.011428    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:29.051385    4248 logs.go:282] 0 containers: []
	W1212 21:36:29.051385    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:29.055841    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:29.091342    4248 logs.go:282] 0 containers: []
	W1212 21:36:29.091342    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:29.095332    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:29.123336    4248 logs.go:282] 0 containers: []
	W1212 21:36:29.123336    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:29.126340    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:29.155367    4248 logs.go:282] 0 containers: []
	W1212 21:36:29.155367    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:29.155367    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:29.155367    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:29.207287    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:29.207287    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:29.272168    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:29.272168    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:29.312257    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:29.312257    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:29.391617    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:29.382784   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.383879   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.384928   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.386418   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.387786   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:29.382784   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.383879   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.384928   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.386418   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.387786   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:29.391617    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:29.391617    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:31.923841    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:31.950124    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:31.983967    4248 logs.go:282] 0 containers: []
	W1212 21:36:31.983967    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:31.987737    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:32.015027    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.015027    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:32.020109    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:32.055983    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.056068    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:32.059730    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:32.089140    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.089140    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:32.094462    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:32.122929    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.122929    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:32.126837    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:32.156251    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.156251    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:32.160350    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:32.191862    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.191949    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:32.195885    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:32.223866    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.223925    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:32.223925    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:32.223950    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:32.255049    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:32.255049    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:32.302818    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:32.302880    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:32.366288    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:32.366288    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:32.405752    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:32.405752    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:32.490704    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:32.476428   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.477300   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.482162   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.483107   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.484601   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:32.476428   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.477300   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.482162   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.483107   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.484601   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:34.995924    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:35.024010    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:35.056509    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.056509    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:35.060912    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:35.093115    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.093115    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:35.097758    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:35.128352    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.128352    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:35.132438    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:35.159545    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.159545    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:35.163881    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:35.193455    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.193455    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:35.197292    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:35.225826    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.225826    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:35.230118    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:35.258718    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.258718    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:35.262754    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:35.289884    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.289884    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:35.289884    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:35.289884    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:35.354177    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:35.354177    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:35.392766    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:35.393766    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:35.508577    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:35.495201   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.497790   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.499934   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.500799   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.503455   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:35.495201   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.497790   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.499934   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.500799   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.503455   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:35.508577    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:35.508577    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:35.536964    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:35.538023    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:38.113096    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:38.138012    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:38.170611    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.170611    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:38.174540    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:38.203460    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.203460    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:38.209947    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:38.239843    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.239843    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:38.243116    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:38.271611    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.271611    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:38.275487    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:38.305418    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.305450    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:38.309409    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:38.336902    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.336902    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:38.340380    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:38.367606    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.367606    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:38.373821    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:38.402583    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.402583    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:38.402583    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:38.402583    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:38.438279    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:38.438279    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:38.525316    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:38.512227   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.513179   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.516702   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.518912   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.519829   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:38.512227   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.513179   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.516702   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.518912   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.519829   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:38.525316    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:38.525316    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:38.552742    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:38.553263    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:38.623531    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:38.623531    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:41.192803    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:41.221527    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:41.253765    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.253765    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:41.258162    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:41.286154    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.286154    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:41.290125    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:41.316985    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.316985    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:41.321219    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:41.349797    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.349797    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:41.353105    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:41.383082    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.383082    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:41.386895    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:41.414456    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.414456    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:41.418483    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:41.449520    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.449577    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:41.453163    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:41.486452    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.486504    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:41.486504    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:41.486504    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:41.547617    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:41.547617    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:41.587426    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:41.587426    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:41.672162    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:41.660909   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.663125   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.664194   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.665601   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.666634   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:41.660909   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.663125   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.664194   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.665601   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.666634   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:41.672162    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:41.672162    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:41.698838    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:41.698838    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:44.254238    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:44.279639    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:44.313852    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.313852    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:44.317789    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:44.346488    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.346488    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:44.349923    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:44.379740    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.379774    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:44.383168    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:44.412140    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.412140    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:44.416191    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:44.460651    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.460681    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:44.465023    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:44.496502    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.496526    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:44.500357    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:44.532104    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.532155    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:44.536284    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:44.564677    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.564677    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:44.564677    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:44.564768    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:44.642641    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:44.642641    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:44.681185    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:44.681185    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:44.775811    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:44.763716   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.764946   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.767080   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.768963   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.770287   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:44.763716   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.764946   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.767080   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.768963   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.770287   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:44.775858    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:44.775858    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:44.802443    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:44.802443    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:47.355434    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:47.380861    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:47.416615    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.416688    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:47.422899    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:47.449927    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.449927    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:47.453937    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:47.482382    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.482382    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:47.486265    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:47.517752    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.517752    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:47.521863    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:47.553097    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.553097    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:47.557020    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:47.586229    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.586229    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:47.590605    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:47.629776    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.629776    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:47.633503    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:47.660408    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.660408    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:47.660408    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:47.660408    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:47.751292    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:47.741586   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.742768   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.744535   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.745790   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.746879   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:47.741586   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.742768   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.744535   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.745790   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.746879   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:47.751292    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:47.751292    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:47.779192    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:47.779254    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:47.837296    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:47.837296    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:47.900027    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:47.900027    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:50.444550    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:50.467997    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:50.496690    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.496690    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:50.500967    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:50.526317    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.526317    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:50.530527    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:50.561433    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.561433    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:50.566001    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:50.618519    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.618519    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:50.622092    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:50.650073    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.650073    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:50.655016    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:50.683594    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.683623    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:50.687452    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:50.718509    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.718509    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:50.724946    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:50.757545    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.757577    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:50.757618    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:50.757618    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:50.819457    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:50.819457    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:50.858548    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:50.858548    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:50.941749    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:50.931367   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.932776   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.933833   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.935487   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.938377   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:50.931367   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.932776   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.933833   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.935487   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.938377   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:50.941749    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:50.941749    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:50.969772    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:50.969772    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:53.520939    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:53.549491    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:53.583344    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.583344    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:53.588894    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:53.618751    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.618751    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:53.623090    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:53.650283    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.650283    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:53.656108    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:53.682662    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.682727    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:53.686551    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:53.713705    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.713705    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:53.717716    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:53.744792    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.744792    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:53.749211    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:53.779976    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.779976    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:53.783888    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:53.815109    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.815109    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:53.815109    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:53.815109    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:53.876921    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:53.876921    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:53.916304    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:53.916304    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:54.003977    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:53.994198   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.995584   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.997212   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.998274   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.999401   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:53.994198   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.995584   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.997212   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.998274   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.999401   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:54.004510    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:54.004510    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:54.033807    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:54.033807    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:56.586896    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:56.610373    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:56.643875    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.643875    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:56.648210    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:56.679979    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.679979    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:56.684252    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:56.712701    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.712745    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:56.716425    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:56.746231    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.746231    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:56.750051    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:56.778902    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.778902    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:56.784361    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:56.813624    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.813624    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:56.817949    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:56.846221    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.846221    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:56.849772    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:56.880299    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.880299    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:56.880299    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:56.880299    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:56.945090    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:56.946089    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:56.985505    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:56.985505    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:57.077375    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:57.068729   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.069760   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.070813   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.071834   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.072749   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:57.068729   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.069760   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.070813   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.071834   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.072749   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:57.077375    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:57.077375    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:57.103533    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:57.103533    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:59.659092    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:59.684113    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:59.716016    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.716040    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:59.719576    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:59.749209    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.749209    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:59.752876    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:59.781442    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.781442    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:59.785342    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:59.814766    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.814766    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:59.818786    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:59.846373    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.846373    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:59.849782    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:59.877994    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.877994    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:59.881893    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:59.910479    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.910479    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:59.914372    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:59.946561    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.946561    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:59.946561    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:59.946561    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:00.008124    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:00.008124    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:00.047147    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:00.047147    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:00.137432    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:00.126870   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.127736   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.130207   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.131199   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.132348   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:00.126870   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.127736   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.130207   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.131199   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.132348   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:00.137480    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:00.137480    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:00.167211    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:00.167211    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:02.725601    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:02.750880    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:02.781655    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.781720    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:02.785930    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:02.814342    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.815352    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:02.819060    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:02.848212    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.848212    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:02.852622    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:02.879034    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.879034    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:02.883002    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:02.914061    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.914061    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:02.918271    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:02.946216    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.946289    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:02.949752    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:02.979537    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.979570    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:02.983289    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:03.012201    4248 logs.go:282] 0 containers: []
	W1212 21:37:03.012201    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:03.012201    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:03.012201    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:03.098494    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:03.086265   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.087072   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.089538   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.090486   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.092996   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:03.086265   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.087072   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.089538   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.090486   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.092996   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:03.098494    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:03.098494    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:03.124942    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:03.124942    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:03.172838    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:03.172838    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:03.233652    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:03.233652    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:05.778260    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:05.806049    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:05.834569    4248 logs.go:282] 0 containers: []
	W1212 21:37:05.834569    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:05.838184    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:05.871331    4248 logs.go:282] 0 containers: []
	W1212 21:37:05.871331    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:05.874924    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:05.904108    4248 logs.go:282] 0 containers: []
	W1212 21:37:05.904108    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:05.907882    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:05.941911    4248 logs.go:282] 0 containers: []
	W1212 21:37:05.941911    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:05.945711    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:05.978806    4248 logs.go:282] 0 containers: []
	W1212 21:37:05.978845    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:05.983103    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:06.010395    4248 logs.go:282] 0 containers: []
	W1212 21:37:06.010395    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:06.015899    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:06.043426    4248 logs.go:282] 0 containers: []
	W1212 21:37:06.043475    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:06.047525    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:06.075777    4248 logs.go:282] 0 containers: []
	W1212 21:37:06.075777    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:06.075777    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:06.075777    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:06.140912    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:06.140912    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:06.180839    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:06.180839    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:06.273920    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:06.262433   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.263564   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.264506   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.265910   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.267120   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:06.262433   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.263564   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.264506   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.265910   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.267120   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:06.273941    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:06.273941    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:06.301408    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:06.301408    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:08.853362    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:08.880482    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:08.912285    4248 logs.go:282] 0 containers: []
	W1212 21:37:08.912285    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:08.915914    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:08.945359    4248 logs.go:282] 0 containers: []
	W1212 21:37:08.945359    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:08.951021    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:08.978398    4248 logs.go:282] 0 containers: []
	W1212 21:37:08.978398    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:08.981959    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:09.013763    4248 logs.go:282] 0 containers: []
	W1212 21:37:09.013763    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:09.017724    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:09.045423    4248 logs.go:282] 0 containers: []
	W1212 21:37:09.045423    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:09.049596    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:09.077554    4248 logs.go:282] 0 containers: []
	W1212 21:37:09.077554    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:09.081163    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:09.108945    4248 logs.go:282] 0 containers: []
	W1212 21:37:09.109001    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:09.112577    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:09.141679    4248 logs.go:282] 0 containers: []
	W1212 21:37:09.141740    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:09.141765    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:09.141765    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:09.207494    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:09.208014    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:09.275675    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:09.275675    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:09.320177    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:09.320252    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:09.418820    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:09.405124   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.406042   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.410434   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.411496   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.412429   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:09.405124   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.406042   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.410434   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.411496   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.412429   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:09.418849    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:09.418849    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:11.950067    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:11.974163    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:12.007025    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.007025    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:12.010964    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:12.042863    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.042863    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:12.046143    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:12.076655    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.076726    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:12.080236    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:12.107161    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.107161    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:12.113344    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:12.142179    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.142272    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:12.146446    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:12.176797    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.176797    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:12.180681    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:12.209797    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.209797    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:12.213605    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:12.244494    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.244494    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:12.244494    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:12.244494    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:12.332970    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:12.322197   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.323340   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.325130   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.326221   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.328022   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:12.322197   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.323340   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.325130   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.326221   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.328022   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:12.332970    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:12.332970    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:12.362486    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:12.363006    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:12.407548    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:12.407548    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:12.469640    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:12.469640    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:15.019141    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:15.042869    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:15.073404    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.073404    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:15.076962    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:15.105390    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.105390    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:15.109785    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:15.143740    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.143775    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:15.147734    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:15.174650    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.174711    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:15.178235    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:15.207870    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.207870    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:15.212288    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:15.248454    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.248454    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:15.253060    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:15.282067    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.282067    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:15.285778    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:15.317032    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.317032    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:15.317032    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:15.317032    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:15.350767    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:15.350767    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:15.408508    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:15.408508    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:15.471124    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:15.471124    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:15.511541    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:15.511541    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:15.597230    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:15.586821   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.588485   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.590856   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.591864   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.593117   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:15.586821   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.588485   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.590856   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.591864   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.593117   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:18.103161    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:18.132020    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:18.167621    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.167621    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:18.171555    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:18.197535    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.197535    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:18.201484    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:18.231207    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.231237    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:18.234569    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:18.262608    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.262608    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:18.266310    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:18.291496    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.291496    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:18.296129    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:18.323567    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.323567    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:18.328112    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:18.363055    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.363055    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:18.368448    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:18.398543    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.398543    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:18.398543    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:18.398543    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:18.451687    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:18.451738    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:18.512324    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:18.512324    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:18.553614    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:18.553614    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:18.644707    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:18.634792   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.635628   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.638131   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.639176   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.640339   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:18.634792   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.635628   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.638131   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.639176   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.640339   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:18.644734    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:18.644779    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:21.175562    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:21.201442    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:21.233480    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.233480    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:21.237891    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:21.267032    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.267032    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:21.273539    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:21.301291    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.301291    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:21.304622    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:21.333953    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.333953    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:21.336973    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:21.366442    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.366442    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:21.370770    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:21.401250    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.401326    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:21.406507    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:21.434989    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.434989    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:21.438536    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:21.468847    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.468895    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:21.468895    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:21.468937    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:21.506543    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:21.506543    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:21.592900    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:21.582025   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.584472   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.585983   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.587799   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.588955   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:21.582025   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.584472   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.585983   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.587799   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.588955   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:21.592928    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:21.592980    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:21.624073    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:21.624114    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:21.675642    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:21.675642    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:24.243223    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:24.272878    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:24.306285    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.306285    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:24.310609    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:24.340982    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.340982    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:24.344434    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:24.371790    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.371790    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:24.376448    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:24.403045    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.403045    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:24.406643    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:24.436352    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.436352    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:24.440299    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:24.472033    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.472033    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:24.476007    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:24.508554    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.508554    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:24.512161    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:24.542727    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.542727    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:24.542727    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:24.542727    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:24.570829    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:24.570829    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:24.618660    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:24.618660    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:24.682106    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:24.682106    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:24.721952    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:24.721952    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:24.799468    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:24.791295   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.792228   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.793454   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.794593   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.795784   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:24.791295   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.792228   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.793454   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.794593   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.795784   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:27.305001    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:27.330707    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:27.365828    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.365828    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:27.370558    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:27.396820    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.396820    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:27.401269    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:27.430536    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.430536    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:27.434026    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:27.462920    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.462920    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:27.466302    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:27.494753    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.494753    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:27.498776    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:27.526827    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.526827    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:27.530938    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:27.558811    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.558811    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:27.562896    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:27.593235    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.593235    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:27.593235    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:27.593235    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:27.645061    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:27.645061    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:27.708198    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:27.708198    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:27.746161    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:27.746161    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:27.834200    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:27.822744   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.823517   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.828076   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.828851   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.830876   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:27.822744   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.823517   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.828076   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.828851   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.830876   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:27.834200    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:27.834200    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:30.365194    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:30.390907    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:30.422859    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.422859    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:30.426658    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:30.458081    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.458081    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:30.462130    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:30.492792    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.492838    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:30.496517    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:30.535575    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.535575    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:30.539664    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:30.570934    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.570934    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:30.575357    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:30.606013    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.606013    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:30.610553    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:30.637448    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.637448    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:30.640965    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:30.670791    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.670866    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:30.670866    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:30.670866    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:30.701120    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:30.701120    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:30.751223    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:30.751223    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:30.813495    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:30.813495    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:30.853428    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:30.853428    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:30.937812    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:30.926651   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.927983   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.930891   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.931995   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.933300   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:30.926651   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.927983   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.930891   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.931995   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.933300   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:33.442840    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:33.471704    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:33.504567    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.504567    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:33.508564    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:33.540112    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.540147    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:33.544036    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:33.572905    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.572905    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:33.576956    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:33.606272    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.606334    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:33.610145    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:33.637137    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.637137    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:33.641246    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:33.670136    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.670136    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:33.673715    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:33.701659    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.701659    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:33.705326    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:33.736499    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.736585    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:33.736585    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:33.736585    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:33.802820    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:33.802820    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:33.841898    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:33.841898    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:33.928502    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:33.917072   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.918297   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.919525   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.922045   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.923688   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:33.917072   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.918297   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.919525   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.922045   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.923688   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:33.928502    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:33.928502    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:33.954803    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:33.954803    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:36.508990    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:36.532529    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:36.565107    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.565107    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:36.569219    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:36.599219    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.599219    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:36.604130    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:36.641323    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.641399    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:36.644874    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:36.678077    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.678077    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:36.681676    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:36.717361    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.717361    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:36.720484    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:36.758068    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.758131    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:36.761928    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:36.788886    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.788886    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:36.792763    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:36.822518    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.822518    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:36.822518    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:36.822594    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:36.886902    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:36.886902    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:36.926353    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:36.926353    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:37.017351    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:37.005572   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.006475   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.009602   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.011562   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.013067   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:37.005572   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.006475   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.009602   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.011562   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.013067   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:37.017351    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:37.017351    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:37.043945    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:37.043945    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:39.613292    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:39.638402    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:39.668963    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.668963    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:39.674050    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:39.706941    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.706993    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:39.711641    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:39.743407    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.743407    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:39.748540    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:39.776567    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.776567    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:39.780756    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:39.809769    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.809769    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:39.814028    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:39.841619    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.841619    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:39.845432    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:39.872294    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.872294    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:39.876039    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:39.906559    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.906559    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:39.906559    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:39.906559    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:39.971123    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:39.971123    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:40.010767    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:40.010767    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:40.121979    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:40.111150   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.112155   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.113799   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.114617   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.117879   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:40.111150   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.112155   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.113799   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.114617   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.117879   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:40.121979    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:40.121979    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:40.153150    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:40.153150    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:42.714553    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:42.739259    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:42.773825    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.773825    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:42.777653    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:42.806593    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.806617    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:42.811305    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:42.839804    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.839804    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:42.843545    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:42.871645    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.871645    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:42.877455    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:42.907575    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.907674    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:42.911474    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:42.947872    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.947872    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:42.951182    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:42.981899    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.981899    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:42.985358    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:43.015278    4248 logs.go:282] 0 containers: []
	W1212 21:37:43.015278    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:43.015278    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:43.015278    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:43.083520    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:43.083520    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:43.124100    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:43.124100    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:43.208232    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:43.199122   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.200440   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.201500   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.203019   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.204672   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:43.199122   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.200440   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.201500   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.203019   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.204672   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:43.208232    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:43.208232    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:43.234266    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:43.234266    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:45.791967    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:45.818451    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:45.851045    4248 logs.go:282] 0 containers: []
	W1212 21:37:45.851045    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:45.854848    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:45.880205    4248 logs.go:282] 0 containers: []
	W1212 21:37:45.880205    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:45.883681    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:45.910629    4248 logs.go:282] 0 containers: []
	W1212 21:37:45.910629    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:45.914618    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:45.944467    4248 logs.go:282] 0 containers: []
	W1212 21:37:45.944467    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:45.948393    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:45.979772    4248 logs.go:282] 0 containers: []
	W1212 21:37:45.979772    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:45.983154    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:46.011861    4248 logs.go:282] 0 containers: []
	W1212 21:37:46.011947    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:46.016147    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:46.043151    4248 logs.go:282] 0 containers: []
	W1212 21:37:46.043151    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:46.048940    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:46.101712    4248 logs.go:282] 0 containers: []
	W1212 21:37:46.101712    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:46.101712    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:46.101712    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:46.165060    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:46.165060    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:46.204152    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:46.204152    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:46.295737    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:46.284405   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.285323   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.289255   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.290702   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.291840   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:46.284405   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.285323   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.289255   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.290702   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.291840   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:46.295737    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:46.295737    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:46.323140    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:46.323657    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:48.876615    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:48.902293    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:48.935424    4248 logs.go:282] 0 containers: []
	W1212 21:37:48.935424    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:48.939391    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:48.966927    4248 logs.go:282] 0 containers: []
	W1212 21:37:48.966927    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:48.970734    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:49.001644    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.001644    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:49.005407    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:49.035360    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.035360    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:49.042740    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:49.074356    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.074356    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:49.078793    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:49.110567    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.110625    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:49.114551    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:49.145236    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.145236    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:49.149599    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:49.177230    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.177230    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:49.177230    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:49.177230    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:49.240142    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:49.240142    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:49.278723    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:49.278723    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:49.367647    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:49.358947   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.359901   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.361621   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.363095   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.364121   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:49.358947   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.359901   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.361621   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.363095   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.364121   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:49.367647    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:49.367647    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:49.397635    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:49.397635    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:51.962408    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:51.992442    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:52.024460    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.024460    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:52.028629    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:52.060221    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.060221    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:52.064265    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:52.104649    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.104649    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:52.109138    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:52.140487    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.140545    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:52.144120    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:52.172932    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.172932    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:52.176618    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:52.206650    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.206650    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:52.210399    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:52.236993    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.236993    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:52.240861    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:52.270655    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.270655    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:52.270655    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:52.270655    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:52.335104    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:52.335104    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:52.370957    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:52.371840    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:52.457985    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:52.448019   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.449089   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.450136   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.451710   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.452676   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:52.448019   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.449089   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.450136   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.451710   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.452676   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:52.457985    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:52.457985    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:52.486332    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:52.486332    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:55.041298    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:55.065637    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:55.094280    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.094280    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:55.097903    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:55.126902    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.126902    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:55.130716    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:55.159228    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.159228    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:55.163220    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:55.192251    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.192251    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:55.195844    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:55.221302    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.221342    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:55.224818    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:55.251600    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.251600    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:55.258126    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:55.288004    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.288004    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:55.292538    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:55.321503    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.321503    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:55.321503    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:55.321503    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:55.382091    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:55.382091    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:55.417183    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:55.417183    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:55.505809    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:55.497729   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.498853   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.500068   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.501049   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.502394   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:55.497729   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.498853   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.500068   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.501049   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.502394   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:55.505857    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:55.505922    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:55.533563    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:55.533563    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:58.084879    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:58.108938    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:58.141011    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.141011    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:58.144507    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:58.173301    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.173301    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:58.177012    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:58.205946    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.205946    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:58.209603    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:58.239537    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.239626    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:58.243771    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:58.274180    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.274180    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:58.278119    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:58.306549    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.306589    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:58.310707    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:58.341993    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.341993    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:58.345805    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:58.374110    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.374110    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:58.374110    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:58.374110    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:58.438540    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:58.438540    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:58.479144    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:58.479144    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:58.563382    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:58.555856   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.556864   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.558351   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.559659   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.561038   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:58.555856   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.556864   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.558351   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.559659   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.561038   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:58.563382    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:58.563382    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:58.590030    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:58.591001    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:01.143523    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:01.166879    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:01.204311    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.204311    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:01.208667    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:01.236959    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.236959    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:01.241497    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:01.268362    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.268362    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:01.272390    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:01.301769    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.301769    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:01.306386    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:01.334250    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.334250    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:01.338080    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:01.367719    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.367719    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:01.371554    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:01.400912    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.400912    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:01.405087    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:01.433025    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.433079    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:01.433112    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:01.433140    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:01.498716    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:01.498716    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:01.537789    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:01.537789    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:01.621520    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:01.609272   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.610819   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.612400   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.614811   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.616071   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:01.609272   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.610819   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.612400   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.614811   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.616071   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:01.621520    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:01.621520    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:01.651241    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:01.651241    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:04.202726    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:04.233568    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:04.264266    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.264266    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:04.268731    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:04.299179    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.299179    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:04.304521    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:04.333532    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.333532    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:04.337480    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:04.370718    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.370774    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:04.374487    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:04.404113    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.404113    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:04.407484    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:04.439641    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.439641    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:04.442993    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:04.473704    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.473745    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:04.478029    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:04.506810    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.506810    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:04.506810    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:04.506810    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:04.536546    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:04.536546    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:04.595827    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:04.595827    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:04.655750    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:04.655750    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:04.693978    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:04.693978    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:04.780038    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:04.769629   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.770581   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.771942   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.773186   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.774761   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:04.769629   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.770581   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.771942   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.773186   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.774761   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:07.285343    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:07.309791    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:07.342594    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.342658    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:07.346771    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:07.375078    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.375078    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:07.378622    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:07.406406    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.406406    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:07.409700    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:07.439671    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.439702    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:07.443226    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:07.474113    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.474113    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:07.478278    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:07.506266    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.506266    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:07.511246    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:07.539784    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.539813    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:07.543598    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:07.571190    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.571190    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:07.571190    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:07.571190    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:07.621969    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:07.621969    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:07.686280    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:07.686280    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:07.729355    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:07.729355    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:07.818055    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:07.806835   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.807966   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.809280   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.811568   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.812813   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:07.806835   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.807966   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.809280   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.811568   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.812813   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:07.818055    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:07.818055    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:10.353048    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:10.380806    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:10.411111    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.411111    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:10.417906    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:10.445879    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.445879    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:10.449270    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:10.478782    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.478782    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:10.482418    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:10.514768    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.514768    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:10.518402    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:10.549807    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.549841    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:10.553625    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:10.584420    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.584420    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:10.590061    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:10.617570    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.617570    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:10.621915    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:10.650697    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.650697    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:10.650697    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:10.650697    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:10.688035    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:10.688035    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:10.779967    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:10.768220   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.770447   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.773427   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.775250   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.776328   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:10.768220   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.770447   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.773427   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.775250   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.776328   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:10.779967    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:10.779967    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:10.808999    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:10.808999    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:10.857901    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:10.857901    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:13.426838    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:13.455711    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:13.487399    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.487399    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:13.491220    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:13.521694    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.521694    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:13.525468    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:13.554648    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.554648    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:13.559306    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:13.587335    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.587335    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:13.591025    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:13.619654    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.619654    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:13.623563    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:13.653939    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.653939    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:13.657955    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:13.687366    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.687396    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:13.690775    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:13.722113    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.722193    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:13.722231    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:13.722231    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:13.810317    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:13.799024   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.800050   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.801497   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.802454   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.803945   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:13.799024   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.800050   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.801497   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.802454   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.803945   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:13.810317    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:13.810317    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:13.838155    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:13.838155    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:13.883053    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:13.883053    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:13.946291    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:13.946291    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:16.490914    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:16.517055    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:16.546289    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.546289    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:16.549648    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:16.579266    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.579266    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:16.583479    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:16.622750    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.622824    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:16.625968    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:16.653518    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.653558    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:16.657430    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:16.684716    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.684716    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:16.688471    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:16.715508    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.715508    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:16.720093    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:16.747105    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.747105    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:16.751009    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:16.778855    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.778889    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:16.778935    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:16.778935    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:16.866923    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:16.857374   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.858226   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.860682   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.861845   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.863178   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:16.857374   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.858226   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.860682   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.861845   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.863178   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:16.866923    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:16.866923    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:16.893634    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:16.893634    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:16.947106    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:16.947106    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:17.009695    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:17.009695    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:19.555421    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:19.585126    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:19.618491    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.618491    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:19.621943    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:19.649934    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.649934    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:19.654446    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:19.682441    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.682441    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:19.686687    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:19.713873    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.713873    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:19.718086    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:19.746901    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.746901    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:19.751802    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:19.780998    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.780998    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:19.785656    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:19.814435    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.814435    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:19.818376    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:19.842539    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.842539    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:19.842539    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:19.842539    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:19.931943    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:19.922035   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.923477   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.924372   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.926985   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.927824   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:19.922035   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.923477   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.924372   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.926985   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.927824   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:19.931943    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:19.931943    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:19.962377    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:19.962377    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:20.016397    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:20.016397    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:20.080069    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:20.080069    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:22.623830    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:22.648339    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:22.676455    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.676455    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:22.680434    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:22.707663    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.707663    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:22.711156    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:22.740689    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.740689    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:22.747514    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:22.774589    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.774589    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:22.778733    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:22.809957    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.810016    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:22.814216    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:22.843548    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.843548    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:22.848917    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:22.881212    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.881212    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:22.885127    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:22.912249    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.912249    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:22.912249    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:22.912249    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:22.971764    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:22.971764    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:23.012466    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:23.012466    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:23.098040    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:23.088804   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.090233   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.092125   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.092902   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.095639   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:23.088804   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.090233   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.092125   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.092902   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.095639   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:23.098040    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:23.098040    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:23.125246    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:23.125299    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:25.680678    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:25.710865    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:25.744205    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.744205    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:25.748694    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:25.775965    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.775965    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:25.780266    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:25.809226    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.809226    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:25.813428    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:25.843074    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.843074    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:25.847624    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:25.875245    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.875307    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:25.878757    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:25.909526    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.909526    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:25.913226    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:25.940382    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.940382    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:25.945238    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:25.971090    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.971123    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:25.971123    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:25.971123    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:26.056782    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:26.046652   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.047515   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.050195   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.051210   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.054000   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:26.046652   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.047515   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.050195   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.051210   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.054000   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:26.056824    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:26.056824    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:26.088188    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:26.088188    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:26.134947    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:26.134990    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:26.195007    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:26.195007    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:28.743432    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:28.770616    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:28.803520    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.803520    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:28.810180    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:28.835854    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.835854    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:28.839216    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:28.867332    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.867332    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:28.871770    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:28.898967    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.899021    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:28.902579    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:28.930727    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.930781    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:28.934892    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:28.965429    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.965484    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:28.968912    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:28.994989    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.995086    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:28.998524    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:29.029494    4248 logs.go:282] 0 containers: []
	W1212 21:38:29.029494    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:29.029494    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:29.029494    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:29.084546    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:29.084546    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:29.146031    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:29.146031    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:29.185235    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:29.185235    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:29.276958    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:29.265529   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.266811   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.267596   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.272121   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.273001   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:29.265529   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.266811   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.267596   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.272121   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.273001   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:29.277002    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:29.277048    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:31.813255    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:31.837157    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:31.867469    4248 logs.go:282] 0 containers: []
	W1212 21:38:31.867532    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:31.871061    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:31.899568    4248 logs.go:282] 0 containers: []
	W1212 21:38:31.899568    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:31.903533    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:31.932812    4248 logs.go:282] 0 containers: []
	W1212 21:38:31.932812    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:31.937348    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:31.968624    4248 logs.go:282] 0 containers: []
	W1212 21:38:31.968624    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:31.972596    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:31.999542    4248 logs.go:282] 0 containers: []
	W1212 21:38:31.999542    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:32.004209    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:32.034665    4248 logs.go:282] 0 containers: []
	W1212 21:38:32.034665    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:32.038848    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:32.068480    4248 logs.go:282] 0 containers: []
	W1212 21:38:32.068480    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:32.073156    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:32.104268    4248 logs.go:282] 0 containers: []
	W1212 21:38:32.104268    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:32.104268    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:32.104268    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:32.168878    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:32.168878    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:32.209739    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:32.209739    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:32.299388    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:32.287377   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.288363   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.290983   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.292213   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.293196   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:32.287377   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.288363   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.290983   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.292213   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.293196   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:32.299388    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:32.299388    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:32.326590    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:32.327171    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:34.882209    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:34.906646    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:34.937770    4248 logs.go:282] 0 containers: []
	W1212 21:38:34.937770    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:34.941176    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:34.970749    4248 logs.go:282] 0 containers: []
	W1212 21:38:34.970749    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:34.974824    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:35.003731    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.003731    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:35.011153    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:35.043865    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.043865    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:35.047948    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:35.079197    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.079197    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:35.084870    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:35.111591    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.111645    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:35.115847    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:35.144310    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.144310    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:35.148221    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:35.176803    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.176833    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:35.176833    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:35.176833    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:35.236846    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:35.236846    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:35.284685    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:35.284685    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:35.374702    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:35.363981   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.364969   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.367167   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.368565   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.369594   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:35.363981   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.364969   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.367167   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.368565   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.369594   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:35.374702    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:35.374702    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:35.402523    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:35.402584    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:37.960369    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:37.991489    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:38.021000    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.021059    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:38.024791    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:38.056577    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.056577    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:38.061074    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:38.091553    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.091619    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:38.095584    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:38.124245    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.124245    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:38.127814    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:38.156149    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.156149    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:38.159694    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:38.191453    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.191475    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:38.195307    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:38.226021    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.226046    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:38.229445    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:38.258701    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.258701    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:38.258701    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:38.258701    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:38.324178    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:38.324178    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:38.363665    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:38.363665    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:38.454082    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:38.443711   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.444603   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.447135   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.448189   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.449070   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:38.443711   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.444603   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.447135   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.448189   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.449070   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:38.454082    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:38.454082    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:38.481686    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:38.481686    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:41.036796    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:41.064580    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:41.096576    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.096636    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:41.100082    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:41.131382    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.131439    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:41.135017    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:41.164298    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.164360    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:41.167964    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:41.198065    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.198065    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:41.202878    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:41.230510    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.230510    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:41.234299    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:41.263767    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.263767    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:41.267078    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:41.296096    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.296096    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:41.299444    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:41.332967    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.332967    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:41.332967    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:41.332967    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:41.380925    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:41.380925    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:41.445577    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:41.445577    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:41.484612    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:41.484612    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:41.569457    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:41.558338   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.559501   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.561053   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.561965   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.565400   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:41.558338   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.559501   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.561053   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.561965   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.565400   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:41.569457    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:41.569457    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:44.125865    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:44.149891    4248 out.go:203] 
	W1212 21:38:44.151830    4248 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1212 21:38:44.151830    4248 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1212 21:38:44.152349    4248 out.go:285] * Related issues:
	W1212 21:38:44.152403    4248 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1212 21:38:44.152403    4248 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1212 21:38:44.154560    4248 out.go:203] 
	
	
	==> Docker <==
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.460204251Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.460292361Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.460303163Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.460308363Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.460314664Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.460334266Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.460365970Z" level=info msg="Initializing buildkit"
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.559170137Z" level=info msg="Completed buildkit initialization"
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.564331352Z" level=info msg="Daemon has completed initialization"
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.564491671Z" level=info msg="API listen on /run/docker.sock"
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.564517274Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.564565579Z" level=info msg="API listen on [::]:2376"
	Dec 12 21:32:39 newest-cni-449900 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 12 21:32:40 newest-cni-449900 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 21:32:40 newest-cni-449900 cri-dockerd[1218]: time="2025-12-12T21:32:40Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 12 21:32:40 newest-cni-449900 cri-dockerd[1218]: time="2025-12-12T21:32:40Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 12 21:32:40 newest-cni-449900 cri-dockerd[1218]: time="2025-12-12T21:32:40Z" level=info msg="Start docker client with request timeout 0s"
	Dec 12 21:32:40 newest-cni-449900 cri-dockerd[1218]: time="2025-12-12T21:32:40Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 12 21:32:40 newest-cni-449900 cri-dockerd[1218]: time="2025-12-12T21:32:40Z" level=info msg="Loaded network plugin cni"
	Dec 12 21:32:40 newest-cni-449900 cri-dockerd[1218]: time="2025-12-12T21:32:40Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 12 21:32:40 newest-cni-449900 cri-dockerd[1218]: time="2025-12-12T21:32:40Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 12 21:32:40 newest-cni-449900 cri-dockerd[1218]: time="2025-12-12T21:32:40Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 12 21:32:40 newest-cni-449900 cri-dockerd[1218]: time="2025-12-12T21:32:40Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 12 21:32:40 newest-cni-449900 cri-dockerd[1218]: time="2025-12-12T21:32:40Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 12 21:32:40 newest-cni-449900 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:48.299052   19364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:48.300121   19364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:48.301279   19364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:48.303049   19364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:48.304101   19364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +5.817259] CPU: 7 PID: 461935 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f4cc709eb20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f4cc709eaf6.
	[  +0.000001] RSP: 002b:00007ffc97ee3b30 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.851832] CPU: 4 PID: 462074 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f8ee5e9fb20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f8ee5e9faf6.
	[  +0.000001] RSP: 002b:00007ffc84e853d0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 21:38:48 up  2:40,  0 user,  load average: 1.18, 1.12, 2.26
	Linux newest-cni-449900 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 21:38:44 newest-cni-449900 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:38:45 newest-cni-449900 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 485.
	Dec 12 21:38:45 newest-cni-449900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:38:45 newest-cni-449900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:38:45 newest-cni-449900 kubelet[19193]: E1212 21:38:45.334003   19193 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:38:45 newest-cni-449900 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:38:45 newest-cni-449900 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:38:45 newest-cni-449900 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 486.
	Dec 12 21:38:45 newest-cni-449900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:38:45 newest-cni-449900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:38:46 newest-cni-449900 kubelet[19207]: E1212 21:38:46.139991   19207 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:38:46 newest-cni-449900 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:38:46 newest-cni-449900 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:38:46 newest-cni-449900 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 487.
	Dec 12 21:38:46 newest-cni-449900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:38:46 newest-cni-449900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:38:47 newest-cni-449900 kubelet[19235]: E1212 21:38:47.070302   19235 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:38:47 newest-cni-449900 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:38:47 newest-cni-449900 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:38:47 newest-cni-449900 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 488.
	Dec 12 21:38:47 newest-cni-449900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:38:47 newest-cni-449900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:38:47 newest-cni-449900 kubelet[19250]: E1212 21:38:47.815087   19250 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:38:47 newest-cni-449900 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:38:47 newest-cni-449900 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-449900 -n newest-cni-449900
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-449900 -n newest-cni-449900: exit status 2 (579.3443ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-449900" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (380.95s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (545.48s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1212 21:36:36.514931   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1212 21:36:54.984347   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1212 21:37:01.530599   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1212 21:37:31.933412   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:37:34.502856   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-124600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1212 21:37:44.606750   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:37:47.876683   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1212 21:38:00.168283   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:38:01.184939   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:38:05.631688   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1212 21:39:10.950569   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1212 21:39:24.256875   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1212 21:39:54.852177   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1212 21:40:13.444683   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:40:18.045588   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1212 21:40:22.836358   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1212 21:40:31.916104   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:40:33.658822   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-246400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:40:38.459881   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1212 21:41:45.909195   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1212 21:41:56.728230   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-246400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1212 21:42:31.938281   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:42:34.508456   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-124600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1212 21:42:44.610754   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:42:47.881545   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1212 21:43:00.173707   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1212 21:43:01.189436   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:43:05.636713   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1212 21:43:57.578607   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-124600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1212 21:44:54.857173   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1212 21:45:13.449484   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:45:18.050256   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1212 21:45:22.841027   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-285600 -n no-preload-285600
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-285600 -n no-preload-285600: exit status 2 (616.4738ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "no-preload-285600" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-285600
helpers_test.go:244: (dbg) docker inspect no-preload-285600:

-- stdout --
	[
	    {
	        "Id": "87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941",
	        "Created": "2025-12-12T21:19:18.160660304Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 447961,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T21:30:13.371959374Z",
	            "FinishedAt": "2025-12-12T21:30:09.786882361Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941/hostname",
	        "HostsPath": "/var/lib/docker/containers/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941/hosts",
	        "LogPath": "/var/lib/docker/containers/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941-json.log",
	        "Name": "/no-preload-285600",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-285600:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-285600",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7f57614ba244d0d3b05389aca0ac3118a52910d8ae213a8eb43d4329923d883e-init/diff:/var/lib/docker/overlay2/98a48e148b465d6f3cfef773b7defe35ccd4e6e0eb3c49d8b329f27b5b52fa09/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7f57614ba244d0d3b05389aca0ac3118a52910d8ae213a8eb43d4329923d883e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7f57614ba244d0d3b05389aca0ac3118a52910d8ae213a8eb43d4329923d883e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7f57614ba244d0d3b05389aca0ac3118a52910d8ae213a8eb43d4329923d883e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-285600",
	                "Source": "/var/lib/docker/volumes/no-preload-285600/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-285600",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-285600",
	                "name.minikube.sigs.k8s.io": "no-preload-285600",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "91bcdd83bbb23ae9c67dcec01b8d4c16af48c7f986914ad0290fdd4a6c1ce136",
	            "SandboxKey": "/var/run/docker/netns/91bcdd83bbb2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62838"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62839"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62840"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62841"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62842"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-285600": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.121.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:79:02",
	                    "DriverOpts": null,
	                    "NetworkID": "eade8f7b7c484afc6ac9fb22c89a1319287a418e545549931d13e1b2247abede",
	                    "EndpointID": "a19528b5ba1e129df46a773b4e6c518e041141c1355dc620986fcd6472d55808",
	                    "Gateway": "192.168.121.1",
	                    "IPAddress": "192.168.121.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-285600",
	                        "87204d707694"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-285600 -n no-preload-285600
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-285600 -n no-preload-285600: exit status 2 (576.3105ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-285600 logs -n 25
E1212 21:45:31.921443   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-285600 logs -n 25: (1.6914345s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                            ARGS                                                                                                            │           PROFILE            │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ embed-certs-729900 image list --format=json                                                                                                                                                                                │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ pause   │ -p embed-certs-729900 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ unpause │ -p embed-certs-729900 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-124600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                    │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ start   │ -p default-k8s-diff-port-124600 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:23 UTC │
	│ delete  │ -p embed-certs-729900                                                                                                                                                                                                      │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:23 UTC │
	│ delete  │ -p embed-certs-729900                                                                                                                                                                                                      │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ image   │ default-k8s-diff-port-124600 image list --format=json                                                                                                                                                                      │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ pause   │ -p default-k8s-diff-port-124600 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ unpause │ -p default-k8s-diff-port-124600 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ delete  │ -p default-k8s-diff-port-124600                                                                                                                                                                                            │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ delete  │ -p default-k8s-diff-port-124600                                                                                                                                                                                            │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ addons  │ enable metrics-server -p no-preload-285600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                    │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:28 UTC │                     │
	│ stop    │ -p no-preload-285600 --alsologtostderr -v=3                                                                                                                                                                                │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:30 UTC │ 12 Dec 25 21:30 UTC │
	│ addons  │ enable dashboard -p no-preload-285600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                               │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:30 UTC │ 12 Dec 25 21:30 UTC │
	│ start   │ -p no-preload-285600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:30 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-449900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                    │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:30 UTC │                     │
	│ stop    │ -p newest-cni-449900 --alsologtostderr -v=3                                                                                                                                                                                │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:32 UTC │ 12 Dec 25 21:32 UTC │
	│ addons  │ enable dashboard -p newest-cni-449900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                               │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:32 UTC │ 12 Dec 25 21:32 UTC │
	│ start   │ -p newest-cni-449900 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:32 UTC │                     │
	│ image   │ newest-cni-449900 image list --format=json                                                                                                                                                                                 │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:38 UTC │ 12 Dec 25 21:38 UTC │
	│ pause   │ -p newest-cni-449900 --alsologtostderr -v=1                                                                                                                                                                                │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:38 UTC │ 12 Dec 25 21:38 UTC │
	│ unpause │ -p newest-cni-449900 --alsologtostderr -v=1                                                                                                                                                                                │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:38 UTC │ 12 Dec 25 21:38 UTC │
	│ delete  │ -p newest-cni-449900                                                                                                                                                                                                       │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:39 UTC │ 12 Dec 25 21:39 UTC │
	│ delete  │ -p newest-cni-449900                                                                                                                                                                                                       │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:39 UTC │ 12 Dec 25 21:39 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 21:32:30
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 21:32:30.067892    4248 out.go:360] Setting OutFile to fd 948 ...
	I1212 21:32:30.114132    4248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:32:30.114655    4248 out.go:374] Setting ErrFile to fd 1420...
	I1212 21:32:30.114655    4248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:32:30.127088    4248 out.go:368] Setting JSON to false
	I1212 21:32:30.129182    4248 start.go:133] hostinfo: {"hostname":"minikube4","uptime":9288,"bootTime":1765565862,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 21:32:30.129182    4248 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 21:32:30.137179    4248 out.go:179] * [newest-cni-449900] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 21:32:30.141601    4248 notify.go:221] Checking for updates...
	I1212 21:32:30.144580    4248 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:32:30.147458    4248 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:32:30.149957    4248 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 21:32:30.152754    4248 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 21:32:30.158286    4248 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:32:30.162328    4248 config.go:182] Loaded profile config "newest-cni-449900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:32:30.162996    4248 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 21:32:30.280643    4248 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 21:32:30.284638    4248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:32:30.515373    4248 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 21:32:30.495157348 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:32:30.521131    4248 out.go:179] * Using the docker driver based on existing profile
	I1212 21:32:30.522918    4248 start.go:309] selected driver: docker
	I1212 21:32:30.523440    4248 start.go:927] validating driver "docker" against &{Name:newest-cni-449900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:32:30.523530    4248 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:32:30.607623    4248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:32:30.833176    4248 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 21:32:30.815233925 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:32:30.833176    4248 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 21:32:30.833176    4248 cni.go:84] Creating CNI manager for ""
	I1212 21:32:30.833176    4248 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:32:30.834178    4248 start.go:353] cluster config:
	{Name:newest-cni-449900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:32:30.837195    4248 out.go:179] * Starting "newest-cni-449900" primary control-plane node in "newest-cni-449900" cluster
	I1212 21:32:30.839178    4248 cache.go:134] Beginning downloading kic base image for docker with docker
	I1212 21:32:30.842177    4248 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:32:30.845192    4248 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:32:30.845192    4248 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 21:32:30.846208    4248 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1212 21:32:30.846208    4248 cache.go:65] Caching tarball of preloaded images
	I1212 21:32:30.846208    4248 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 21:32:30.846208    4248 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1212 21:32:30.846208    4248 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\config.json ...
	I1212 21:32:30.923261    4248 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:32:30.923313    4248 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:32:30.923343    4248 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:32:30.923343    4248 start.go:360] acquireMachinesLock for newest-cni-449900: {Name:mk48eea84400331f4298a4f493f82f3b8d104477 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:32:30.923343    4248 start.go:364] duration metric: took 0s to acquireMachinesLock for "newest-cni-449900"
	I1212 21:32:30.923343    4248 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:32:30.923343    4248 fix.go:54] fixHost starting: 
	I1212 21:32:30.931113    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:30.993597    4248 fix.go:112] recreateIfNeeded on newest-cni-449900: state=Stopped err=<nil>
	W1212 21:32:30.993597    4248 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:32:30.996597    4248 out.go:252] * Restarting existing docker container for "newest-cni-449900" ...
	I1212 21:32:31.000598    4248 cli_runner.go:164] Run: docker start newest-cni-449900
	I1212 21:32:31.538801    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:31.592240    4248 kic.go:430] container "newest-cni-449900" state is running.
	I1212 21:32:31.597242    4248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-449900
	I1212 21:32:31.648243    4248 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\config.json ...
	I1212 21:32:31.650253    4248 machine.go:94] provisionDockerMachine start ...
	I1212 21:32:31.653233    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:31.705233    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:31.706241    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:31.706241    4248 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:32:31.708237    4248 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 21:32:34.889437    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-449900
	
	I1212 21:32:34.889437    4248 ubuntu.go:182] provisioning hostname "newest-cni-449900"
	I1212 21:32:34.893345    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:34.953803    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:34.953803    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:34.953803    4248 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-449900 && echo "newest-cni-449900" | sudo tee /etc/hostname
	W1212 21:32:35.616553   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:32:35.153212    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-449900
	
	I1212 21:32:35.157498    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:35.214117    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:35.214117    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:35.214117    4248 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-449900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-449900/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-449900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:32:35.408770    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:32:35.408770    4248 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1212 21:32:35.408770    4248 ubuntu.go:190] setting up certificates
	I1212 21:32:35.408770    4248 provision.go:84] configureAuth start
	I1212 21:32:35.413085    4248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-449900
	I1212 21:32:35.465735    4248 provision.go:143] copyHostCerts
	I1212 21:32:35.466783    4248 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1212 21:32:35.466783    4248 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1212 21:32:35.466783    4248 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 21:32:35.468113    4248 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1212 21:32:35.468113    4248 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1212 21:32:35.468113    4248 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1212 21:32:35.469268    4248 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1212 21:32:35.469268    4248 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1212 21:32:35.469268    4248 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 21:32:35.469826    4248 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-449900 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-449900]
	I1212 21:32:35.674609    4248 provision.go:177] copyRemoteCerts
	I1212 21:32:35.678598    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:32:35.681582    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:35.739388    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:35.872528    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 21:32:35.899869    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 21:32:35.927844    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 21:32:35.957449    4248 provision.go:87] duration metric: took 548.67ms to configureAuth
	I1212 21:32:35.957449    4248 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:32:35.957975    4248 config.go:182] Loaded profile config "newest-cni-449900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:32:35.961590    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:36.020998    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:36.021502    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:36.021530    4248 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 21:32:36.199792    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1212 21:32:36.199792    4248 ubuntu.go:71] root file system type: overlay
	I1212 21:32:36.199792    4248 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 21:32:36.203186    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:36.259998    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:36.260186    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:36.260186    4248 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 21:32:36.441991    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 21:32:36.446390    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:36.501449    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:36.501449    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:36.501449    4248 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
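[Editor's note] The SSH command above uses a diff-or-replace idiom: the new unit file only overwrites the old one (and only then is the daemon reloaded and restarted) when the two actually differ, so a re-run with identical content is a no-op. A minimal sketch of the same idiom against plain files in a temp directory (file names and contents here are illustrative; no sudo or systemd involved):

```shell
#!/usr/bin/env sh
set -eu
dir=$(mktemp -d)
printf 'ExecStart=old\n' > "$dir/docker.service"
printf 'ExecStart=new\n' > "$dir/docker.service.new"

# diff exits non-zero when the files differ, so the || branch replaces the
# file; when they already match, nothing runs and the "restart" is skipped.
diff -u "$dir/docker.service" "$dir/docker.service.new" >/dev/null || {
  mv "$dir/docker.service.new" "$dir/docker.service"
  echo "unit replaced; would now run: systemctl daemon-reload && systemctl restart docker"
}
grep 'ExecStart' "$dir/docker.service"
```

Note the restart only fires on real changes, which is why the repeated provisioning passes in this log leave a healthy docker.service untouched.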
	I1212 21:32:36.684877    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:32:36.684877    4248 machine.go:97] duration metric: took 5.0345422s to provisionDockerMachine
	I1212 21:32:36.684877    4248 start.go:293] postStartSetup for "newest-cni-449900" (driver="docker")
	I1212 21:32:36.684877    4248 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:32:36.689141    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:32:36.692539    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:36.754930    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:36.889809    4248 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:32:36.897890    4248 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:32:36.897963    4248 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:32:36.897963    4248 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1212 21:32:36.898205    4248 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1212 21:32:36.898730    4248 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> 133962.pem in /etc/ssl/certs
	I1212 21:32:36.903242    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:32:36.916977    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /etc/ssl/certs/133962.pem (1708 bytes)
	I1212 21:32:36.944800    4248 start.go:296] duration metric: took 259.9189ms for postStartSetup
	I1212 21:32:36.949417    4248 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:32:36.952948    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:37.004507    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:37.144023    4248 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:32:37.153150    4248 fix.go:56] duration metric: took 6.229706s for fixHost
	I1212 21:32:37.153150    4248 start.go:83] releasing machines lock for "newest-cni-449900", held for 6.229706s
	I1212 21:32:37.156410    4248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-449900
	I1212 21:32:37.223471    4248 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1212 21:32:37.227811    4248 ssh_runner.go:195] Run: cat /version.json
	I1212 21:32:37.227811    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:37.230777    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:37.282580    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:37.288636    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	W1212 21:32:37.396194    4248 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1212 21:32:37.419380    4248 ssh_runner.go:195] Run: systemctl --version
	I1212 21:32:37.433080    4248 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:32:37.443489    4248 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:32:37.447019    4248 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:32:37.460679    4248 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:32:37.460679    4248 start.go:496] detecting cgroup driver to use...
	I1212 21:32:37.460679    4248 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:32:37.460679    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:32:37.488659    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1212 21:32:37.507342    4248 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1212 21:32:37.507342    4248 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1212 21:32:37.507342    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 21:32:37.523982    4248 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 21:32:37.528069    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 21:32:37.548632    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:32:37.567777    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 21:32:37.588723    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:32:37.608062    4248 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:32:37.629201    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 21:32:37.649560    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 21:32:37.667527    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
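[Editor's note] The sed runs above all follow one pattern: capture the line's leading indentation with `( *)` and re-emit it via `\1` while swapping the value. A small sketch of two of those rewrites against a sample config.toml (the file contents here are illustrative, not the real containerd config):

```shell
#!/usr/bin/env sh
set -eu
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.k8s.io/pause:3.9"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
EOF

# \1 preserves whatever indentation ( *) captured, so TOML nesting survives.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'sandbox_image' "$cfg"
grep 'SystemdCgroup' "$cfg"
```

Using `|` as the delimiter avoids escaping the slashes in the image reference; `-r` enables the extended-regex capture group syntax (GNU sed).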
	I1212 21:32:37.690529    4248 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:32:37.710687    4248 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:32:37.729958    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:37.888501    4248 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 21:32:38.016543    4248 start.go:496] detecting cgroup driver to use...
	I1212 21:32:38.016543    4248 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:32:38.021759    4248 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 21:32:38.045532    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:32:38.071354    4248 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:32:38.150544    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:32:38.173582    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 21:32:38.193077    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:32:38.218979    4248 ssh_runner.go:195] Run: which cri-dockerd
	I1212 21:32:38.231122    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 21:32:38.246140    4248 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1212 21:32:38.271127    4248 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 21:32:38.414548    4248 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 21:32:38.549270    4248 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 21:32:38.549270    4248 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 21:32:38.574682    4248 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1212 21:32:38.596980    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:38.746077    4248 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 21:32:39.571827    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:32:39.594550    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1212 21:32:39.618807    4248 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1212 21:32:39.645739    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:32:39.668299    4248 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 21:32:39.807124    4248 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 21:32:39.946119    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:40.086356    4248 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 21:32:40.115320    4248 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1212 21:32:40.136980    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:40.275781    4248 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1212 21:32:40.384906    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:32:40.403362    4248 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 21:32:40.407665    4248 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 21:32:40.417070    4248 start.go:564] Will wait 60s for crictl version
	I1212 21:32:40.421802    4248 ssh_runner.go:195] Run: which crictl
	I1212 21:32:40.435075    4248 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:32:40.477353    4248 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1212 21:32:40.481633    4248 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:32:40.526598    4248 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:32:40.566170    4248 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1212 21:32:40.570307    4248 cli_runner.go:164] Run: docker exec -t newest-cni-449900 dig +short host.docker.internal
	I1212 21:32:40.704237    4248 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1212 21:32:40.709308    4248 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1212 21:32:40.719244    4248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
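[Editor's note] The /etc/hosts update above is a filter-append-replace idiom: strip any stale `host.minikube.internal` entry, append the current mapping, and swap the whole file in with one `cp` so no reader ever sees a half-written hosts file. A sketch against a local copy (no sudo; the IPs are taken from the log, the bash `$'\t'` quoting matches the command as logged):

```shell
#!/usr/bin/env bash
set -eu
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.65.2\thost.minikube.internal\n' > "$hosts"

# grep -v drops the old tab-separated entry (the trailing $ anchors end of
# line), then the fresh mapping is appended and the file replaced atomically
# from the reader's point of view.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.65.254\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
grep 'host.minikube.internal' "$hosts"
```

This is why the preceding `grep 192.168.65.254 ... /etc/hosts` probe can fail harmlessly: a miss just means the rewrite runs.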
	I1212 21:32:40.739290    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:40.797774    4248 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 21:32:40.799861    4248 kubeadm.go:884] updating cluster {Name:newest-cni-449900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 21:32:40.800240    4248 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 21:32:40.804298    4248 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:32:40.837317    4248 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 21:32:40.837317    4248 docker.go:621] Images already preloaded, skipping extraction
	I1212 21:32:40.841250    4248 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:32:40.875188    4248 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 21:32:40.875188    4248 cache_images.go:86] Images are preloaded, skipping loading
	I1212 21:32:40.875188    4248 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 docker true true} ...
	I1212 21:32:40.875753    4248 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-449900 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:32:40.879753    4248 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 21:32:40.954718    4248 cni.go:84] Creating CNI manager for ""
	I1212 21:32:40.954718    4248 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:32:40.954718    4248 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1212 21:32:40.954718    4248 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-449900 NodeName:newest-cni-449900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:32:40.954718    4248 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-449900"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:32:40.959727    4248 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 21:32:40.972418    4248 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:32:40.977111    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:32:40.988704    4248 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1212 21:32:41.011653    4248 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 21:32:41.032100    4248 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1212 21:32:41.059217    4248 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1212 21:32:41.066089    4248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:32:41.085707    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:41.226868    4248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:32:41.248749    4248 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900 for IP: 192.168.85.2
	I1212 21:32:41.248749    4248 certs.go:195] generating shared ca certs ...
	I1212 21:32:41.248749    4248 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:32:41.249389    4248 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1212 21:32:41.249651    4248 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1212 21:32:41.249726    4248 certs.go:257] generating profile certs ...
	I1212 21:32:41.250263    4248 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\client.key
	I1212 21:32:41.250513    4248 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.key.67e5e88d
	I1212 21:32:41.250722    4248 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\proxy-client.key
	I1212 21:32:41.251524    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem (1338 bytes)
	W1212 21:32:41.251741    4248 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396_empty.pem, impossibly tiny 0 bytes
	I1212 21:32:41.251826    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1212 21:32:41.252063    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 21:32:41.252220    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 21:32:41.252398    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1212 21:32:41.252710    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem (1708 bytes)
	I1212 21:32:41.254134    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:32:41.288378    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 21:32:41.319186    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:32:41.346629    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:32:41.379412    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 21:32:41.407785    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 21:32:41.434595    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:32:41.465218    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:32:41.491104    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem --> /usr/share/ca-certificates/13396.pem (1338 bytes)
	I1212 21:32:41.526955    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /usr/share/ca-certificates/133962.pem (1708 bytes)
	I1212 21:32:41.554086    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:32:41.585032    4248 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:32:41.609643    4248 ssh_runner.go:195] Run: openssl version
	I1212 21:32:41.624413    4248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/133962.pem
	I1212 21:32:41.643262    4248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/133962.pem /etc/ssl/certs/133962.pem
	I1212 21:32:41.661513    4248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133962.pem
	I1212 21:32:41.670034    4248 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 21:32:41.674026    4248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133962.pem
	I1212 21:32:41.722721    4248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:32:41.739428    4248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:32:41.758028    4248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:32:41.775766    4248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:32:41.783842    4248 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:32:41.788493    4248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:32:41.834978    4248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:32:41.852716    4248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13396.pem
	I1212 21:32:41.872502    4248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13396.pem /etc/ssl/certs/13396.pem
	I1212 21:32:41.892268    4248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13396.pem
	I1212 21:32:41.902126    4248 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 21:32:41.906908    4248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13396.pem
	I1212 21:32:41.957407    4248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:32:41.975429    4248 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:32:41.988570    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:32:42.036143    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:32:42.085875    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:32:42.134210    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:32:42.182920    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:32:42.231061    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 21:32:42.275151    4248 kubeadm.go:401] StartCluster: {Name:newest-cni-449900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:32:42.279857    4248 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 21:32:42.316959    4248 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:32:42.330116    4248 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 21:32:42.330159    4248 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 21:32:42.334136    4248 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:32:42.349113    4248 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:32:42.353059    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.410308    4248 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-449900" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:32:42.411003    4248 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-449900" cluster setting kubeconfig missing "newest-cni-449900" context setting]
	I1212 21:32:42.411047    4248 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:32:42.433468    4248 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:32:42.450763    4248 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1212 21:32:42.450851    4248 kubeadm.go:602] duration metric: took 120.6907ms to restartPrimaryControlPlane
	I1212 21:32:42.450851    4248 kubeadm.go:403] duration metric: took 175.6977ms to StartCluster
	I1212 21:32:42.450887    4248 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:32:42.451069    4248 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:32:42.452318    4248 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:32:42.452708    4248 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 21:32:42.452708    4248 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 21:32:42.453236    4248 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-449900"
	I1212 21:32:42.453345    4248 addons.go:70] Setting dashboard=true in profile "newest-cni-449900"
	I1212 21:32:42.453345    4248 addons.go:239] Setting addon dashboard=true in "newest-cni-449900"
	W1212 21:32:42.453442    4248 addons.go:248] addon dashboard should already be in state true
	I1212 21:32:42.453345    4248 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-449900"
	I1212 21:32:42.453442    4248 config.go:182] Loaded profile config "newest-cni-449900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:32:42.453345    4248 addons.go:70] Setting default-storageclass=true in profile "newest-cni-449900"
	I1212 21:32:42.453442    4248 host.go:66] Checking if "newest-cni-449900" exists ...
	I1212 21:32:42.453548    4248 host.go:66] Checking if "newest-cni-449900" exists ...
	I1212 21:32:42.453548    4248 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-449900"
	I1212 21:32:42.456924    4248 out.go:179] * Verifying Kubernetes components...
	I1212 21:32:42.462734    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:42.462734    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:42.462734    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:42.464017    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:42.521801    4248 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1212 21:32:42.522505    4248 addons.go:239] Setting addon default-storageclass=true in "newest-cni-449900"
	I1212 21:32:42.522505    4248 host.go:66] Checking if "newest-cni-449900" exists ...
	I1212 21:32:42.524049    4248 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:32:42.525755    4248 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1212 21:32:42.527435    4248 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:42.527435    4248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:32:42.529307    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 21:32:42.529307    4248 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 21:32:42.532224    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.534064    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.535884    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:42.589450    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:42.589450    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:42.590457    4248 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:32:42.590457    4248 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:32:42.593457    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.639453    4248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:32:42.655360    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:42.731687    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 21:32:42.731721    4248 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 21:32:42.735286    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:42.750859    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 21:32:42.750859    4248 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 21:32:42.774055    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.777032    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 21:32:42.777032    4248 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 21:32:42.833790    4248 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:32:42.834403    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 21:32:42.834403    4248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 21:32:42.837834    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:32:42.838786    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:42.862404    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 21:32:42.862439    4248 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1212 21:32:42.938528    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:42.938593    4248 retry.go:31] will retry after 285.852869ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:42.946574    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 21:32:42.946574    4248 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 21:32:42.970519    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 21:32:42.970570    4248 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 21:32:43.044060    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 21:32:43.044113    4248 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W1212 21:32:43.058105    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.058105    4248 retry.go:31] will retry after 367.133117ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.065874    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 21:32:43.065934    4248 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 21:32:43.090839    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:43.170845    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.170845    4248 retry.go:31] will retry after 360.542613ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.229247    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:43.312297    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.312297    4248 retry.go:31] will retry after 217.305042ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.338331    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:43.430298    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:43.514114    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.514114    4248 retry.go:31] will retry after 194.385848ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.535363    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:43.536351    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:43.629787    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.629787    4248 retry.go:31] will retry after 687.212662ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:32:43.630799    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.630799    4248 retry.go:31] will retry after 521.23237ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.714316    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:43.819832    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.819832    4248 retry.go:31] will retry after 810.162007ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.838681    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:44.158533    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:44.239462    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.239462    4248 retry.go:31] will retry after 722.851273ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.322823    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:44.338752    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:32:44.412482    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.412540    4248 retry.go:31] will retry after 687.458608ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.636450    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:44.715327    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.715327    4248 retry.go:31] will retry after 881.564419ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.839869    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:44.968341    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:45.056052    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.056052    4248 retry.go:31] will retry after 1.083270835s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.107611    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:45.652119   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:32:45.188500    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.188500    4248 retry.go:31] will retry after 1.051455266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.338492    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:45.601770    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:45.692383    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.692901    4248 retry.go:31] will retry after 847.636525ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.840006    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:46.145354    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:46.237762    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.237762    4248 retry.go:31] will retry after 1.841338358s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.245135    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:46.317745    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.317805    4248 retry.go:31] will retry after 1.659422291s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.338989    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:46.545951    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:46.622899    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.622899    4248 retry.go:31] will retry after 2.146117093s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.840053    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:47.339185    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:47.839795    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:47.982731    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:48.067392    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.067392    4248 retry.go:31] will retry after 2.380093596s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.084148    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:48.170522    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.171061    4248 retry.go:31] will retry after 1.169420442s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.339141    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:48.775985    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:32:48.839214    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:32:48.853292    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.853292    4248 retry.go:31] will retry after 1.773821104s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:49.339743    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:49.345080    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:49.426473    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:49.426473    4248 retry.go:31] will retry after 2.584062662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:49.838663    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:50.339208    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:50.452188    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:50.553300    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:50.553376    4248 retry.go:31] will retry after 3.748834475s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:50.633099    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:50.715582    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:50.715582    4248 retry.go:31] will retry after 2.533349108s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:50.839720    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:51.339390    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:51.839102    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:52.016916    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:52.095547    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:52.095725    4248 retry.go:31] will retry after 3.902877695s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:52.339080    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:52.839789    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:53.254021    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:53.339386    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:53.339438    4248 retry.go:31] will retry after 8.052230376s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:53.341078    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:53.838612    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:54.306967    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:54.338622    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:32:54.396253    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:54.396253    4248 retry.go:31] will retry after 4.728755572s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:54.840384    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:32:55.691200   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:32:55.339060    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:55.838985    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:56.003958    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:56.086306    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:56.086306    4248 retry.go:31] will retry after 4.27813674s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:56.339722    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:56.838152    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:57.339158    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:57.840925    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:58.339529    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:58.839108    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:59.131643    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:59.224557    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:59.224611    4248 retry.go:31] will retry after 6.97751667s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:59.340004    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:59.839304    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:00.339076    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:00.368989    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:33:00.453078    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:00.453078    4248 retry.go:31] will retry after 11.55737722s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:00.839680    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:01.337369    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:01.396666    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:33:01.481506    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:01.481506    4248 retry.go:31] will retry after 9.469717232s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:01.840205    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:02.340807    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:02.839775    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:03.338747    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:03.839967    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:04.338805    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:04.839654    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:05.730439   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:05.340177    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:05.839010    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:06.207461    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:33:06.310159    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:06.310159    4248 retry.go:31] will retry after 14.485985358s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:06.338617    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:06.839760    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:07.339574    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:07.839761    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:08.339168    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:08.839559    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:09.340617    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:09.841017    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:10.339655    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:10.840253    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:10.956764    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:33:11.041153    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:11.041153    4248 retry.go:31] will retry after 7.720343102s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:11.339467    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:11.838030    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:12.015476    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:33:12.097506    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:12.098032    4248 retry.go:31] will retry after 13.738254929s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:12.340239    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:12.840739    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:13.338865    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:13.841423    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:14.338850    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:14.839426    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:15.769327   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:15.340112    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:15.839885    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:16.340084    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:16.839133    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:17.340118    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:17.838995    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:18.340239    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:18.768160    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:33:18.839718    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:18.858996    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:18.859079    4248 retry.go:31] will retry after 29.319727103s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:19.339526    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:19.839033    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:20.342579    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:20.799893    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:33:20.838333    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:20.898204    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:20.898204    4248 retry.go:31] will retry after 24.787260988s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:21.340136    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:21.839432    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:22.339934    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:22.838948    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:23.339680    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:23.839713    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:24.339736    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:24.841279    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:25.803331   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:25.340290    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:25.840116    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:25.841834    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:33:25.931872    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:25.931872    4248 retry.go:31] will retry after 19.805631473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:26.340454    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:26.840685    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:27.340143    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:27.839689    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:28.341476    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:28.840343    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:29.340212    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:29.839319    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:30.338669    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:30.838810    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:31.338837    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:31.840375    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:32.339813    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:32.839324    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:33.339926    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:33.839761    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:34.341257    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:34.840212    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:35.842883   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:35.339489    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:35.839856    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:36.337998    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:36.840979    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:37.339343    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:37.839384    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:38.340134    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:38.840040    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:39.339613    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:39.841297    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:40.339971    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:40.839521    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:41.340107    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:41.841992    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:42.341019    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:42.840372    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:42.879262    4248 logs.go:282] 0 containers: []
	W1212 21:33:42.879262    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:42.883398    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:42.914084    4248 logs.go:282] 0 containers: []
	W1212 21:33:42.914084    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:42.917664    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:42.949277    4248 logs.go:282] 0 containers: []
	W1212 21:33:42.949344    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:42.952925    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:42.982130    4248 logs.go:282] 0 containers: []
	W1212 21:33:42.982130    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:42.985453    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:43.018015    4248 logs.go:282] 0 containers: []
	W1212 21:33:43.018015    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:43.021515    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:43.051693    4248 logs.go:282] 0 containers: []
	W1212 21:33:43.051693    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:43.055531    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:43.086156    4248 logs.go:282] 0 containers: []
	W1212 21:33:43.086156    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:43.089864    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:43.124751    4248 logs.go:282] 0 containers: []
	W1212 21:33:43.124784    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:43.124813    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:43.124844    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:43.199819    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:43.199905    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:43.266773    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:43.266773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:43.306805    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:43.306805    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:43.398810    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:43.387138    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.388868    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.391036    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.392819    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.394391    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:43.387138    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.388868    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.391036    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.392819    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.394391    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:43.398906    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:43.398934    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1212 21:33:45.876156   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:45.691215    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:33:45.743028    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:33:45.776796    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:45.776826    4248 retry.go:31] will retry after 41.02577954s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:33:45.824108    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:45.824108    4248 retry.go:31] will retry after 22.15645807s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:45.933456    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:45.956650    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:45.987658    4248 logs.go:282] 0 containers: []
	W1212 21:33:45.987702    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:45.991775    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:46.021440    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.021486    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:46.025175    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:46.052749    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.052749    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:46.057405    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:46.089535    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.089535    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:46.093406    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:46.133904    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.133904    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:46.137545    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:46.167364    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.167364    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:46.172118    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:46.199303    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.199335    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:46.203217    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:46.233482    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.233482    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:46.233482    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:46.233482    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:46.309757    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:46.300789    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.302135    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.303841    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.305512    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.306441    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:46.300789    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.302135    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.303841    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.305512    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.306441    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:46.309757    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:46.309757    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:46.342247    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:46.342247    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:46.392802    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:46.392802    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:46.456332    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:46.456332    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:48.184906    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:33:48.272776    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:48.272776    4248 retry.go:31] will retry after 31.924309129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:49.001621    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:49.031284    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:49.062413    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.062413    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:49.067359    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:49.099157    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.099157    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:49.103086    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:49.145777    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.145841    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:49.148934    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:49.177432    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.177432    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:49.180892    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:49.208677    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.208753    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:49.213124    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:49.242190    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.242273    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:49.246212    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:49.271793    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.271793    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:49.275860    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:49.301204    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.301304    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:49.301304    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:49.301304    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:49.387773    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:49.378267    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.379554    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.380906    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.382182    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.383498    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:49.378267    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.379554    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.380906    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.382182    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.383498    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:49.387773    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:49.387773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:49.417403    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:49.417403    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:49.468610    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:49.468669    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:49.530801    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:49.530801    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:52.075422    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:52.099836    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:52.132317    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.132317    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:52.136505    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:52.168253    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.168329    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:52.172204    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:52.201698    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.201728    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:52.204999    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:52.232400    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.232400    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:52.235747    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:52.264373    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.264373    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:52.268631    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:52.296360    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.296360    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:52.301499    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:52.328506    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.328506    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:52.332618    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:52.364046    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.364046    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:52.364046    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:52.364046    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:52.429893    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:52.429893    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:52.469809    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:52.469809    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:52.561531    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:52.552048    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.553328    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.554454    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.555743    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.556766    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:52.552048    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.553328    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.554454    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.555743    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.556766    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:52.561531    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:52.561531    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:52.589557    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:52.589610    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:33:55.915985   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:55.143558    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:55.169650    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:55.199312    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.199312    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:55.203019    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:55.231107    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.231107    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:55.234903    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:55.262662    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.262662    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:55.267031    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:55.298297    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.298297    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:55.302781    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:55.329536    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.329536    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:55.333805    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:55.361920    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.361920    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:55.366064    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:55.390952    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.390952    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:55.395295    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:55.420565    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.420565    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:55.420565    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:55.420565    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:55.485763    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:55.485763    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:55.523584    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:55.524585    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:55.609239    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:55.599132    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.600050    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.601408    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.603109    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.604199    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:55.599132    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.600050    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.601408    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.603109    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.604199    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:55.609239    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:55.609239    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:55.635439    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:55.635439    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:58.192127    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:58.215045    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:58.245598    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.245598    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:58.249366    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:58.277778    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.277778    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:58.282003    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:58.308406    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.308406    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:58.312530    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:58.343615    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.343615    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:58.347469    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:58.374271    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.374323    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:58.378940    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:58.408013    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.408013    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:58.412815    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:58.442217    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.442217    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:58.446143    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:58.473696    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.473696    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:58.473696    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:58.473696    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:58.512536    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:58.512536    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:58.598847    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:58.588486    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.590430    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.592244    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.593816    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.594663    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:58.588486    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.590430    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.592244    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.593816    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.594663    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:58.598847    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:58.598847    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:58.631972    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:58.631972    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:58.686549    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:58.686549    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:01.254401    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:01.280957    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:01.311649    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.311649    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:01.317426    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:01.347111    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.347111    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:01.351587    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:01.384140    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.384140    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:01.388189    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:01.414950    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.415035    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:01.419033    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:01.451573    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.451573    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:01.458437    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:01.486676    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.486676    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:01.490132    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:01.519922    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.519945    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:01.523525    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:01.551133    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.551133    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:01.551133    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:01.551226    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:01.643156    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:01.629552    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.630752    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.631976    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.634745    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.636369    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:01.629552    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.630752    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.631976    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.634745    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.636369    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:01.643201    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:01.643201    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:01.670680    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:01.670680    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:01.719121    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:01.719121    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:01.778945    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:01.778945    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:04.321335    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:04.346182    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:04.379694    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.379694    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:04.383453    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:04.413770    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.413770    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:04.417438    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:04.449689    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.449689    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:04.453945    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:04.484307    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.484336    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:04.488052    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:04.520437    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.520529    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:04.523808    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:04.554411    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.554486    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:04.558132    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:04.590943    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.590991    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:04.596733    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:04.626155    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.626155    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:04.626155    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:04.626155    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:04.690704    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:04.690704    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:04.737151    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:04.737151    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:04.836298    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:04.823870    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.824851    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.826975    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.828203    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.829992    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:04.823870    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.824851    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.826975    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.828203    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.829992    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:04.836298    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:04.836298    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:04.866456    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:04.866456    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:34:05.955333   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:07.431984    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:07.455983    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:07.487054    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.487054    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:07.491008    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:07.518559    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.518559    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:07.523241    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:07.555141    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.555206    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:07.559053    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:07.586080    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.586129    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:07.590415    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:07.617729    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.617802    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:07.621528    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:07.648847    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.648847    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:07.653016    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:07.679589    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.679589    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:07.682589    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:07.710509    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.710549    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:07.710584    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:07.710584    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:07.740602    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:07.740602    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:07.795596    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:07.795596    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:07.856481    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:07.856481    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:07.896541    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:07.896541    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:34:07.984068    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:34:07.988076    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:07.977659    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.978729    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.979455    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.981723    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.982573    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:07.977659    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.978729    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.979455    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.981723    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.982573    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1212 21:34:08.063575    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:34:08.063575    4248 retry.go:31] will retry after 24.947157304s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:34:10.493129    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:10.518032    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:10.551592    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.551592    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:10.555349    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:10.583478    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.583478    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:10.587337    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:10.614968    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.614968    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:10.618410    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:10.649067    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.649067    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:10.651995    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:10.683408    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.683408    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:10.687055    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:10.719380    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.719380    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:10.722379    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:10.750539    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.750539    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:10.753888    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:10.783559    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.783559    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:10.783559    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:10.783559    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:10.847683    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:10.847683    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:10.886290    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:10.886290    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:10.977825    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:10.965780    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.967047    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.970464    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.971759    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.973064    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:10.965780    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.967047    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.970464    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.971759    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.973064    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:10.977825    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:10.977825    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:11.007383    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:11.007383    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:13.563252    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:13.588423    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:13.621565    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.621565    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:13.625750    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:13.654777    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.654777    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:13.658374    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:13.688618    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.688672    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:13.692472    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:13.719610    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.719610    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:13.723237    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:13.752648    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.752648    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:13.756634    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:13.783829    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.783897    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:13.788161    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:13.819558    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.819558    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:13.823971    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:13.852579    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.852579    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:13.852650    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:13.852650    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:13.917806    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:13.917806    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:13.957291    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:13.957291    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:14.045151    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:14.033435    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.035385    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.037555    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.038496    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.039928    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:14.033435    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.035385    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.037555    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.038496    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.039928    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:14.045151    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:14.045151    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:14.072113    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:14.072113    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:34:15.995826   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:16.634542    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:16.664271    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:16.693836    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.693836    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:16.697626    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:16.724961    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.724961    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:16.729680    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:16.759174    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.759174    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:16.764416    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:16.793992    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.793992    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:16.801287    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:16.833032    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.833032    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:16.837930    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:16.867028    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.867109    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:16.871232    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:16.901023    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.901023    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:16.904729    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:16.932215    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.932215    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:16.932215    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:16.932215    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:17.000205    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:17.000205    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:17.040242    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:17.040242    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:17.128411    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:17.118697    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.119770    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.121001    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.121935    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.122994    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:17.118697    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.119770    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.121001    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.121935    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.122994    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:17.128411    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:17.128411    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:17.154693    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:17.154693    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:19.712060    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:19.740196    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:19.769381    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.769381    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:19.774003    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:19.805171    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.805171    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:19.809714    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:19.839073    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.839073    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:19.846215    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:19.874403    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.874403    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:19.878637    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:19.912913    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.912913    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:19.916626    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:19.944658    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.944710    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:19.948241    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:19.979179    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.979241    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:19.982846    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:20.011754    4248 logs.go:282] 0 containers: []
	W1212 21:34:20.011812    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:20.011864    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:20.011913    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:20.052176    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:20.052176    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:20.143325    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:20.132505    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.133469    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.134667    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.135927    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.139001    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:20.132505    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.133469    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.134667    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.135927    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.139001    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:20.143363    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:20.143410    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:20.172292    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:20.172292    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:20.202316    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:34:20.222997    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:20.222997    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1212 21:34:20.293327    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:34:20.293327    4248 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 21:34:22.836246    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:22.860619    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:22.894925    4248 logs.go:282] 0 containers: []
	W1212 21:34:22.894925    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:22.898706    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:22.928391    4248 logs.go:282] 0 containers: []
	W1212 21:34:22.928391    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:22.931509    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:22.962526    4248 logs.go:282] 0 containers: []
	W1212 21:34:22.962526    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:22.966341    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:22.998739    4248 logs.go:282] 0 containers: []
	W1212 21:34:22.998810    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:23.002260    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:23.038485    4248 logs.go:282] 0 containers: []
	W1212 21:34:23.038485    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:23.042357    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:23.069862    4248 logs.go:282] 0 containers: []
	W1212 21:34:23.069862    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:23.073843    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:23.102066    4248 logs.go:282] 0 containers: []
	W1212 21:34:23.102066    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:23.105940    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:23.134159    4248 logs.go:282] 0 containers: []
	W1212 21:34:23.134159    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:23.134159    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:23.134159    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:23.200245    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:23.200245    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:23.245899    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:23.245899    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:23.334171    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:23.323723    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.324634    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.327041    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.328182    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.329038    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:23.323723    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.324634    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.327041    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.328182    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.329038    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:23.334171    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:23.334171    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:23.363271    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:23.363890    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:34:26.035692   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:25.919925    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:25.943736    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:25.976745    4248 logs.go:282] 0 containers: []
	W1212 21:34:25.976745    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:25.982399    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:26.011034    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.011111    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:26.015074    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:26.041930    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.041960    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:26.046384    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:26.079224    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.079224    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:26.083294    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:26.114533    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.114622    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:26.118567    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:26.144766    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.144766    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:26.148635    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:26.178718    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.178773    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:26.182397    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:26.209360    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.209428    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:26.209458    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:26.209458    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:26.246526    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:26.246526    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:26.333000    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:26.322480    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.324197    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.326202    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.327840    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.328729    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:26.322480    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.324197    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.326202    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.327840    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.328729    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:26.333074    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:26.333074    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:26.364508    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:26.364508    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:26.415398    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:26.415922    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:26.808632    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:34:26.887159    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:34:26.887159    4248 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 21:34:28.982561    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:29.008658    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:29.038321    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.038321    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:29.043365    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:29.074505    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.074505    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:29.079133    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:29.107625    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.107625    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:29.111459    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:29.141746    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.141771    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:29.145351    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:29.173757    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.173757    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:29.177451    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:29.207313    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.207375    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:29.211355    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:29.238896    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.238896    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:29.242592    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:29.271887    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.271955    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:29.271992    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:29.271992    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:29.302135    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:29.302135    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:29.354758    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:29.354787    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:29.415567    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:29.416566    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:29.460567    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:29.460567    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:29.568507    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:29.558828    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.559760    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.561215    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.564376    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.566192    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:29.558828    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.559760    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.561215    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.564376    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.566192    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:32.072619    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:32.098196    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:32.129542    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.129605    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:32.133207    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:32.165871    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.165871    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:32.169728    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:32.197622    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.197622    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:32.201406    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:32.229774    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.229866    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:32.233559    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:32.261871    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.261922    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:32.265550    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:32.292765    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.292838    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:32.297859    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:32.324309    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.324309    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:32.330216    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:32.361218    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.361300    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:32.361300    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:32.361300    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:32.397467    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:32.397467    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:32.486261    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:32.475376    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.476624    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.477661    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.478789    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.479928    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:32.475376    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.476624    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.477661    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.478789    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.479928    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:32.486261    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:32.486261    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:32.514032    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:32.514583    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:32.560591    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:32.560639    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:33.016195    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:34:33.096004    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:34:33.096004    4248 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 21:34:33.099548    4248 out.go:179] * Enabled addons: 
	I1212 21:34:33.102664    4248 addons.go:530] duration metric: took 1m50.6481558s for enable addons: enabled=[]
	W1212 21:34:36.071150   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:35.127374    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:35.151249    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:35.181629    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.181629    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:35.185603    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:35.214096    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.214151    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:35.218239    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:35.247180    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.247201    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:35.253408    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:35.284673    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.284673    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:35.288386    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:35.317675    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.317675    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:35.320949    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:35.350108    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.350178    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:35.353675    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:35.385201    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.385201    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:35.388633    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:35.416281    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.416281    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:35.416281    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:35.416281    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:35.444773    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:35.444773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:35.494185    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:35.494185    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:35.554851    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:35.554851    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:35.593466    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:35.593466    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:35.675497    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:35.665255    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.666291    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.667389    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.668747    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.670048    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:35.665255    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.666291    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.667389    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.668747    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.670048    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:38.181493    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:38.207315    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:38.240970    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.240970    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:38.244930    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:38.270853    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.270853    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:38.274626    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:38.302607    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.302607    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:38.306311    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:38.332974    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.332998    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:38.336938    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:38.366523    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.366523    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:38.370763    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:38.400788    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.400855    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:38.404730    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:38.432105    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.432140    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:38.435718    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:38.468252    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.468252    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:38.468252    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:38.468252    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:38.498010    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:38.498010    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:38.549065    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:38.549065    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:38.610282    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:38.610282    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:38.649865    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:38.649865    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:38.742735    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:38.729670    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.730594    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.734869    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.735575    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.737749    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:38.729670    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.730594    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.734869    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.735575    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.737749    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:41.248859    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:41.273451    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:41.307576    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.307576    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:41.311206    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:41.341191    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.341191    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:41.344873    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:41.373089    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.373089    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:41.377064    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:41.407927    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.407927    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:41.411904    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:41.438747    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.438747    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:41.442684    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:41.471705    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.471705    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:41.475643    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:41.502964    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.503009    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:41.506219    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:41.537468    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.537468    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:41.537468    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:41.537468    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:41.601385    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:41.601385    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:41.640441    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:41.640441    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:41.726762    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:41.716055    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.717335    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.718444    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.719692    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.720641    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:41.716055    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.717335    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.718444    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.719692    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.720641    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:41.727284    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:41.727418    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:41.753195    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:41.753250    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:44.308085    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:44.334644    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:44.365798    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.365798    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:44.369363    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:44.401410    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.401463    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:44.405291    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:44.434343    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.434343    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:44.438273    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:44.468474    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.468525    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:44.474306    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:44.500642    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.500642    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:44.504057    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:44.533188    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.533188    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:44.538912    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:44.570110    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.570156    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:44.573802    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:44.628237    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.628237    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:44.628313    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:44.628313    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:44.695236    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:44.695236    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:44.756020    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:44.756020    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:44.797607    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:44.797607    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:44.885974    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:44.873979    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.875233    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.876512    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.878696    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.880447    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:44.873979    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.875233    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.876512    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.878696    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.880447    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:44.885974    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:44.885974    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1212 21:34:46.110883   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:47.418476    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:47.448163    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:47.485273    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.485367    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:47.489088    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:47.519610    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.519610    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:47.523527    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:47.556797    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.556797    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:47.561198    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:47.592455    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.592486    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:47.597545    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:47.642336    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.642336    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:47.646873    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:47.674652    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.674652    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:47.678167    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:47.711489    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.711583    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:47.715137    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:47.743744    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.743744    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:47.743744    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:47.743744    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:47.772500    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:47.772500    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:47.821703    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:47.821703    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:47.885067    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:47.886068    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:47.927691    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:47.927691    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:48.009816    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:47.997942    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:47.998780    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.001728    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.003155    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.003997    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:47.997942    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:47.998780    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.001728    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.003155    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.003997    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:50.515712    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:50.539433    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:50.569952    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.570011    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:50.573934    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:50.602042    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.602042    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:50.606815    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:50.637314    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.637314    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:50.641189    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:50.671374    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.671448    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:50.675158    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:50.705121    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.705121    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:50.708839    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:50.736349    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.736349    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:50.740642    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:50.766780    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.766780    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:50.771844    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:50.799830    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.799830    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:50.799830    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:50.799935    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:50.864221    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:50.864221    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:50.902900    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:50.902900    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:50.987201    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:50.977230    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.978460    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.979921    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.981707    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.982796    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:50.977230    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.978460    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.979921    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.981707    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.982796    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:50.987245    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:50.987307    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:51.013974    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:51.013974    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:53.565470    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:53.591015    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:53.622676    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.622700    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:53.626721    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:53.656673    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.656708    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:53.660173    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:53.690897    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.690897    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:53.695672    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:53.724746    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.724746    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:53.729650    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:53.755860    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.755860    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:53.761786    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:53.788721    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.788721    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:53.792819    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:53.824882    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.824923    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:53.827878    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:53.861835    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.861835    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:53.861835    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:53.861922    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:53.923537    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:53.923537    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:53.963431    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:53.963431    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:54.047319    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:54.037350    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.038501    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.039589    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.040496    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.043315    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:54.037350    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.038501    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.039589    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.040496    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.043315    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:54.047386    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:54.047386    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:54.072633    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:54.072633    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
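The repeated container-status command above, `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a`, relies on a shell fallback chain: when `crictl` is not on the PATH, `which` prints nothing and exits nonzero, so the `|| echo` branch substitutes the bare name, and if that invocation fails in turn, plain `docker ps -a` runs instead. A minimal sketch of the same pattern, using a deliberately nonexistent tool name for illustration:

```shell
# Resolve a preferred tool, falling back to a literal name when `which` fails.
# `which` prints the resolved path on success; on failure it prints nothing
# and exits nonzero, so the `|| echo` branch supplies the fallback value.
tool=$(which some-nonexistent-tool-xyz || echo fallback-tool)
echo "resolved: $tool"

# The same `A || B` chaining works at command level: B runs only if A fails.
"$tool" --version 2>/dev/null || echo "fallback command ran"
```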
	W1212 21:34:56.149055   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:56.625180    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:56.650655    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:56.685280    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.685318    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:56.689156    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:56.714493    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.714493    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:56.718695    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:56.746923    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.746990    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:56.750886    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:56.778419    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.778419    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:56.783916    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:56.811946    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.811946    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:56.815910    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:56.846245    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.846245    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:56.849750    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:56.881552    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.881612    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:56.886159    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:56.914494    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.914494    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:56.914494    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:56.914494    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:56.978107    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:56.978107    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:57.017141    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:57.017141    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:57.105278    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:57.095758    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.096802    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.099867    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.100954    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.102366    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:57.095758    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.096802    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.099867    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.100954    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.102366    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:57.105278    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:57.105278    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:57.136106    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:57.136106    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:59.698008    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:59.721859    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:59.752926    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.752926    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:59.758293    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:59.787817    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.787817    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:59.792012    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:59.820724    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.820724    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:59.824383    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:59.853943    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.853943    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:59.856939    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:59.884234    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.884234    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:59.887359    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:59.917769    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.917769    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:59.920766    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:59.947735    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.947735    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:59.950845    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:59.982686    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.982686    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:59.982686    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:59.982686    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:00.047428    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:00.047428    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:00.087722    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:00.087722    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:00.173037    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:00.162970    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.163861    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.166472    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.167086    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.169872    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:00.162970    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.163861    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.166472    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.167086    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.169872    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:00.173037    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:00.173124    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:00.200722    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:00.200722    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:02.758771    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:02.785740    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:02.818761    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.818761    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:02.822630    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:02.856985    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.857041    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:02.860042    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:02.891635    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.891635    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:02.896827    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:02.927006    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.927006    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:02.930675    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:02.961911    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.961911    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:02.966203    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:02.995618    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.995618    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:03.000489    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:03.029638    4248 logs.go:282] 0 containers: []
	W1212 21:35:03.029720    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:03.033579    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:03.062226    4248 logs.go:282] 0 containers: []
	W1212 21:35:03.062226    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:03.062226    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:03.062226    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:03.123924    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:03.123924    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:03.164267    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:03.164267    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:03.278702    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:03.266561    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.267580    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.268479    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.270653    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.271579    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:03.266561    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.267580    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.268479    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.270653    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.271579    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:03.278702    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:03.278702    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:03.310678    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:03.310678    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:35:06.187241   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:35:05.870522    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:05.895548    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:05.931745    4248 logs.go:282] 0 containers: []
	W1212 21:35:05.931745    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:05.935905    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:05.967105    4248 logs.go:282] 0 containers: []
	W1212 21:35:05.967167    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:05.970526    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:05.999378    4248 logs.go:282] 0 containers: []
	W1212 21:35:05.999501    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:06.005272    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:06.033559    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.033559    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:06.037432    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:06.067367    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.067423    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:06.071216    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:06.098778    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.098778    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:06.102725    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:06.129330    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.129373    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:06.133426    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:06.161909    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.161982    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:06.161982    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:06.161982    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:06.227303    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:06.227303    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:06.268038    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:06.268038    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:06.361371    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:06.349507    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.350379    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.353273    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.354715    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.355937    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:06.349507    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.350379    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.353273    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.354715    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.355937    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:06.361371    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:06.361371    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:06.387773    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:06.387773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:08.952500    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:08.976516    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:09.008567    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.008567    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:09.012505    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:09.041661    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.041661    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:09.045897    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:09.072715    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.072715    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:09.076471    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:09.104975    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.104975    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:09.110239    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:09.137529    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.137529    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:09.142874    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:09.172751    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.172856    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:09.176271    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:09.208124    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.208124    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:09.211966    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:09.240860    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.240922    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:09.240922    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:09.240922    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:09.277501    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:09.277501    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:09.365788    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:09.355192    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.356293    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.357382    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.358419    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.359695    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:09.355192    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.356293    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.357382    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.358419    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.359695    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:09.365788    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:09.365788    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:09.393564    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:09.393564    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:09.446625    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:09.446625    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:12.014046    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:12.039061    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:12.073012    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.073012    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:12.076517    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:12.106078    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.106078    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:12.110106    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:12.148792    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.148792    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:12.153359    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:12.180485    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.180485    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:12.184489    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:12.216517    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.216517    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:12.219938    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:12.248201    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.248280    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:12.251955    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:12.279303    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.279303    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:12.283688    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:12.311155    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.311155    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:12.311240    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:12.311240    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:12.363330    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:12.363330    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:12.424272    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:12.424272    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:12.465092    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:12.465092    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:12.553464    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:12.543149    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.544047    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.546957    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.548802    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.550266    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:12.543149    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.544047    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.546957    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.548802    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.550266    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:12.553464    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:12.553464    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:15.089907    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:15.115463    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	W1212 21:35:16.227474   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:35:15.148783    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.148874    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:15.153957    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:15.184011    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.184098    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:15.189531    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:15.217178    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.217178    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:15.220770    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:15.250751    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.250751    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:15.254712    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:15.285217    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.285217    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:15.289685    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:15.318549    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.318549    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:15.322745    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:15.350449    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.350449    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:15.354843    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:15.387373    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.387373    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:15.387373    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:15.387448    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:15.447923    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:15.447923    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:15.486013    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:15.486013    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:15.578245    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:15.568067    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.569055    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.570286    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.571140    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.573311    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:15.568067    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.569055    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.570286    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.571140    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.573311    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:15.578325    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:15.578352    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:15.605626    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:15.605626    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:18.166679    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:18.192323    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:18.225358    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.225358    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:18.228980    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:18.257552    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.257552    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:18.261101    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:18.289460    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.289460    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:18.294831    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:18.323718    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.323799    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:18.327263    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:18.355946    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.356051    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:18.360915    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:18.389130    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.389212    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:18.392968    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:18.421891    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.421979    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:18.425694    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:18.454158    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.454158    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:18.454158    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:18.454158    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:18.537920    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:18.527608    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.528385    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.531174    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.532949    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.533980    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:18.527608    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.528385    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.531174    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.532949    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.533980    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:18.537920    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:18.537920    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:18.567575    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:18.567575    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:18.620910    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:18.620945    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:18.683030    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:18.683030    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:21.227570    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:21.256918    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:21.288783    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.288783    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:21.292853    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:21.323426    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.323426    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:21.326841    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:21.358519    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.358519    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:21.364432    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:21.396744    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.396829    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:21.400322    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:21.428939    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.428939    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:21.432771    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:21.462453    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.462453    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:21.466020    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:21.494057    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.494106    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:21.497434    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:21.528610    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.528649    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:21.528649    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:21.528649    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:21.565220    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:21.565220    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:21.656317    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:21.646689    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.647621    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.649439    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.651349    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.653188    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:21.646689    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.647621    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.649439    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.651349    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.653188    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:21.656317    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:21.656317    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:21.685835    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:21.685835    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:21.735864    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:21.735864    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:24.301555    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:24.325946    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:24.356277    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.356277    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:24.360286    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:24.388916    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.388916    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:24.392624    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:24.420665    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.420696    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:24.424318    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:24.453296    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.453296    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:24.456761    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:24.485923    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.485923    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:24.489706    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:24.521430    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.521430    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:24.525001    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:24.553182    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.553182    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:24.557651    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:24.585999    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.585999    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:24.585999    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:24.585999    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:24.649025    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:24.649025    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:24.687813    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:24.687813    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:24.771442    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:24.762805    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.764135    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.765433    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.766914    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.768359    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:24.762805    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.764135    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.765433    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.766914    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.768359    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:24.771442    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:24.771442    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:24.798209    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:24.798236    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:35:26.266522   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:35:27.358394    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:27.382187    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:27.414963    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.414963    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:27.418575    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:27.446874    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.446874    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:27.450946    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:27.478168    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.478206    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:27.481426    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:27.510494    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.510494    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:27.514962    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:27.544571    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.544571    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:27.548425    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:27.577760    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.577760    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:27.583647    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:27.611248    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.611321    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:27.614827    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:27.643657    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.643657    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:27.643657    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:27.643657    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:27.727590    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:27.720083    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.721080    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.722381    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.723519    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.724748    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:27.720083    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.721080    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.722381    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.723519    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.724748    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:27.727590    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:27.727590    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:27.758480    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:27.758480    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:27.807919    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:27.807919    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:27.868159    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:27.868159    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:30.414374    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:30.439656    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:30.469888    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.469958    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:30.473898    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:30.506297    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.506297    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:30.510214    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:30.545930    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.545982    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:30.549910    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:30.576713    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.576713    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:30.581000    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:30.611561    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.611561    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:30.615085    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:30.643517    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.643600    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:30.647297    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:30.677589    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.677589    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:30.681477    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:30.712989    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.713050    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:30.713050    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:30.713050    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:30.800951    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:30.787443    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.789072    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.790773    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.791718    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.795269    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:30.787443    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.789072    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.790773    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.791718    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.795269    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:30.800985    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:30.801031    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:30.827165    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:30.827165    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:30.877219    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:30.877219    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:30.939298    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:30.939298    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:33.484552    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:33.509270    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:33.543536    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.543536    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:33.547226    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:33.579403    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.579456    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:33.583656    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:33.611689    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.611715    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:33.615934    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:33.641875    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.641939    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:33.645973    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:33.678622    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.678622    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:33.682927    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:33.712281    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.712305    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:33.716240    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:33.744051    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.744127    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:33.747417    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:33.779521    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.779591    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:33.779591    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:33.779591    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:33.840187    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:33.840187    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:33.882639    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:33.882639    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:33.969148    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:33.957711    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.958835    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.961164    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.962371    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.963325    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:33.957711    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.958835    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.961164    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.962371    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.963325    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:33.969148    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:33.969148    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:33.997909    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:33.997909    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:35:36.305111   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:35:36.549852    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:36.575183    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:36.607678    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.607678    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:36.611597    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:36.639236    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.639236    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:36.642949    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:36.671624    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.671715    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:36.677217    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:36.704204    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.704254    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:36.707613    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:36.736929    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.736929    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:36.741122    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:36.769406    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.769406    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:36.772706    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:36.799543    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.799620    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:36.803815    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:36.831079    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.831159    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:36.831159    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:36.831200    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:36.896378    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:36.896378    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:36.934866    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:36.934866    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:37.023664    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:37.013641    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.015155    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.015847    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.018003    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.018979    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:37.013641    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.015155    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.015847    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.018003    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.018979    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:37.023664    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:37.023664    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:37.053528    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:37.053528    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:39.635097    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:39.658801    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:39.689842    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.689897    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:39.693102    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:39.720497    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.720497    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:39.724029    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:39.755115    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.755115    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:39.759351    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:39.788837    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.788837    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:39.795292    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:39.820715    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.820715    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:39.824986    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:39.851308    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.851308    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:39.855167    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:39.885106    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.885106    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:39.888522    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:39.919426    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.919426    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:39.919426    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:39.919426    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:40.002497    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:39.992667    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.993466    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.995734    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.996894    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.998122    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:39.992667    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.993466    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.995734    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.996894    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.998122    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:40.002497    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:40.002497    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:40.033288    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:40.033332    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:40.080834    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:40.080834    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:40.164330    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:40.164330    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:42.710863    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:42.734865    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:42.767778    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.767778    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:42.771433    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:42.798128    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.798128    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:42.801849    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:42.831381    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.831381    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:42.834753    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:42.862923    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.862977    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:42.866577    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:42.894694    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.894694    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:42.898324    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:42.927095    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.927169    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:42.930897    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:42.960437    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.960485    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:42.963899    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:42.992769    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.992769    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:42.992769    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:42.992769    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:43.076050    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:43.066448    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.067916    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.069057    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.070194    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.071270    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:43.066448    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.067916    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.069057    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.070194    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.071270    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:43.076050    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:43.076050    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:43.117444    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:43.117444    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:43.163609    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:43.163675    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:43.222686    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:43.222686    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1212 21:35:46.341652   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:35:45.769012    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:45.792622    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:45.828260    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.828300    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:45.832060    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:45.860265    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.860345    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:45.864126    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:45.892449    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.892449    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:45.895900    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:45.928107    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.928492    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:45.933848    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:45.963204    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.963204    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:45.967539    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:45.994068    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.994068    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:45.997960    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:46.029774    4248 logs.go:282] 0 containers: []
	W1212 21:35:46.029774    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:46.034005    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:46.064207    4248 logs.go:282] 0 containers: []
	W1212 21:35:46.064207    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:46.064275    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:46.064297    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:46.150334    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:46.151334    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:46.192422    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:46.192422    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:46.282161    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:46.268121    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.269865    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.274691    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.275494    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.278053    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:46.268121    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.269865    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.274691    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.275494    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.278053    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:46.282161    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:46.282161    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:46.308247    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:46.308247    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:48.880691    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:48.905168    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:48.936143    4248 logs.go:282] 0 containers: []
	W1212 21:35:48.936143    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:48.941055    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:48.967633    4248 logs.go:282] 0 containers: []
	W1212 21:35:48.967681    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:48.972986    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:49.001908    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.001978    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:49.005690    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:49.033288    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.033288    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:49.037158    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:49.068272    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.068272    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:49.072674    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:49.118349    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.118385    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:49.123821    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:49.152003    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.152003    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:49.155603    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:49.184782    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.184857    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:49.184857    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:49.184857    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:49.245561    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:49.245561    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:49.286211    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:49.286211    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:49.376977    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:49.367834   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.369032   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.370233   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.372131   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.374116   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:49.367834   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.369032   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.370233   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.372131   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.374116   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:49.376977    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:49.376977    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:49.403713    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:49.403713    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:51.956745    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:51.982481    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:52.016133    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.016133    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:52.021023    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:52.049536    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.049536    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:52.053672    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:52.080846    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.080846    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:52.084803    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:52.113297    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.113338    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:52.116543    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:52.147940    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.147940    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:52.151545    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:52.182320    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.182320    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:52.186389    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:52.214710    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.214710    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:52.219053    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:52.245190    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.245190    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:52.245190    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:52.245190    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:52.298311    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:52.298311    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:52.366732    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:52.366732    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:52.407792    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:52.407792    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:52.496661    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:52.484312   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.485150   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.488916   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.491754   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.493226   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:52.484312   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.485150   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.488916   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.491754   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.493226   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:52.496661    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:52.496715    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:55.027190    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:55.056715    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:55.092856    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.092927    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:55.096633    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:55.129725    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.129780    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:55.133503    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:55.161685    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.161764    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:55.165325    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:55.194081    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.194081    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:55.197364    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:55.229572    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.229572    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:55.233158    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:55.260429    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.260429    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:55.264792    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:55.292582    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.292582    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:55.296385    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:55.324732    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.324732    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:55.324732    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:55.324732    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:55.378532    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:55.378610    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:55.442350    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:55.442350    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:55.482305    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:55.482305    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:55.568490    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:55.558199   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.559481   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.560697   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.561968   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.565448   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:55.558199   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.559481   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.560697   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.561968   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.565448   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:55.568490    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:55.568490    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:58.101014    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:58.125393    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:58.155725    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.155725    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:58.159868    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:58.189119    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.189119    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:58.193157    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:58.223237    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.223237    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:58.226683    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:58.255695    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.255753    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:58.260433    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:58.288788    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.288859    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:58.292437    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:58.323598    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.323671    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:58.327478    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:58.354090    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.354178    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:58.357888    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:58.386253    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.386279    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:58.386314    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:58.386314    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:58.447141    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:58.447141    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:58.486943    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:58.486943    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:58.568464    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:58.560119   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.561312   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.562297   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.564010   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.565939   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:58.560119   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.561312   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.562297   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.564010   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.565939   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:58.568464    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:58.568464    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:58.595849    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:58.595886    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:35:56.384511   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:36:01.149596    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:01.175623    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:01.208519    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.208575    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:01.211916    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:01.245312    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.245312    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:01.249530    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:01.278392    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.278392    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:01.286444    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:01.317355    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.317406    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:01.321724    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:01.353217    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.353217    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:01.357807    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:01.391831    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.391831    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:01.395723    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:01.424095    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.424095    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:01.429268    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:01.462926    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.462926    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:01.462926    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:01.462926    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:01.551074    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:01.539269   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.540233   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.541438   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.542446   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.543830   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:01.539269   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.540233   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.541438   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.542446   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.543830   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:01.551074    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:01.551074    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:01.578369    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:01.578369    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:01.629021    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:01.629021    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:01.698809    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:01.698809    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:04.243670    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:04.267614    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:04.301625    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.301625    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:04.305862    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:04.333024    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.333024    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:04.336022    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:04.367192    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.367192    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:04.370605    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:04.399595    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.399648    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:04.403374    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:04.432778    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.432778    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:04.436573    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:04.467412    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.467412    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:04.471329    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:04.498578    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.498578    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:04.502597    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:04.531764    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.531784    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:04.531784    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:04.531843    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:04.570958    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:04.570958    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:04.661113    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:04.648460   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.649248   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.652578   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.653840   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.654704   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:04.648460   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.649248   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.652578   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.653840   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.654704   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:04.661169    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:04.661169    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:04.690481    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:04.690542    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:04.741754    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:04.741754    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:07.308278    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:07.331410    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:07.365378    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.365378    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:07.369132    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:07.399048    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.399048    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:07.403054    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:07.435986    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.435986    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:07.440195    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:07.468277    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.468277    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:07.473061    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:07.502665    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.502737    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:07.505870    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:07.535294    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.535294    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:07.539205    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:07.568443    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.568443    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:07.571680    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:07.603485    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.603485    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:07.603485    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:07.603485    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:07.654029    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:07.654069    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:07.718737    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:07.718737    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:07.758197    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:07.758197    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:07.848949    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:07.837788   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.838778   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.839873   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.841153   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.842068   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:07.837788   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.838778   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.839873   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.841153   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.842068   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:07.848949    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:07.848949    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1212 21:36:06.422793   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:36:10.383554    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:10.406923    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:10.436672    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.438058    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:10.441328    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:10.470329    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.470329    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:10.475355    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:10.503029    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.504040    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:10.508067    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:10.535078    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.535078    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:10.538911    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:10.572578    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.572578    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:10.576953    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:10.605274    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.605274    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:10.609586    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:10.636893    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.636893    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:10.640901    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:10.670634    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.670634    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:10.670634    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:10.670634    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:10.740301    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:10.740301    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:10.777927    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:10.777927    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:10.872052    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:10.861842   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.862596   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.865231   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.866657   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.867639   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:10.861842   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.862596   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.865231   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.866657   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.867639   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:10.872052    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:10.872052    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:10.903069    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:10.903069    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:13.460331    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:13.482121    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:13.513917    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.513943    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:13.517730    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:13.551122    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.551122    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:13.554989    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:13.585497    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.585531    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:13.591062    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:13.617529    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.617529    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:13.620977    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:13.649520    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.649563    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:13.653279    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:13.680320    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.680320    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:13.684170    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:13.715222    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.715222    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:13.719105    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:13.748387    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.748387    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:13.748387    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:13.748387    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:13.813468    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:13.813468    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:13.854552    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:13.854552    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:13.940965    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:13.930876   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.931992   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.933250   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.934621   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.935762   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:13.930876   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.931992   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.933250   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.934621   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.935762   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:13.940965    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:13.940965    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:13.967276    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:13.967276    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:16.517841    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:16.542582    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:16.572379    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.572379    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:16.576052    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:16.603452    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.603544    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:16.607222    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:16.634623    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.634623    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:16.638649    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:16.669129    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.669129    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:16.673369    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:16.701294    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.701294    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:16.707834    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:16.734962    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.734962    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:16.739394    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:16.768315    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.768315    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:16.772559    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:16.801591    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.801670    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:16.801687    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:16.801687    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:16.866870    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:16.866870    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:16.906766    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:16.906766    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:16.998441    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:16.985782   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.986673   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.991870   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.993024   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.993940   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:16.985782   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.986673   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.991870   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.993024   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.993940   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:16.998441    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:16.998441    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:17.026255    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:17.026255    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:19.584671    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:19.610156    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:19.644064    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.644129    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:19.648288    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:19.678479    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.678479    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:19.682139    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:19.711766    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.711766    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:19.715209    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:19.744913    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.744913    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:19.748961    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:19.780312    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.780312    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:19.784184    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:19.812347    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.812347    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:19.816306    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:19.844923    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.844923    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:19.848927    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:19.877442    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.877442    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:19.877442    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:19.877442    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:19.942152    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:19.942152    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:19.981218    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:19.981218    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:20.072288    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:20.063301   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.064454   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.065982   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.067322   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.068426   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:20.063301   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.064454   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.065982   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.067322   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.068426   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:20.072288    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:20.072288    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:20.099643    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:20.099643    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:36:16.463968   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:36:25.116556   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1212 21:36:25.116556   13804 node_ready.go:38] duration metric: took 6m0.0006991s for node "no-preload-285600" to be "Ready" ...
	I1212 21:36:22.659535    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:22.685256    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:22.716650    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.716701    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:22.720282    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:22.748382    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.748382    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:22.752865    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:22.781255    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.781255    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:22.785427    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:22.817875    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.817875    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:22.822168    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:22.850625    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.850625    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:22.854306    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:22.882603    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.882665    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:22.886850    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:22.915661    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.915661    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:22.919273    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:22.948188    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.948219    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:22.948219    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:22.948272    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:23.001854    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:23.001854    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:23.062918    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:23.062918    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:23.102898    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:23.102898    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:23.188691    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:23.179388   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.180542   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.181616   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.183060   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.184504   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:23.179388   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.180542   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.181616   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.183060   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.184504   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:23.188733    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:23.188773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:25.120615   13804 out.go:203] 
	W1212 21:36:25.123657   13804 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 21:36:25.123657   13804 out.go:285] * 
	W1212 21:36:25.125621   13804 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 21:36:25.128573   13804 out.go:203] 
	I1212 21:36:25.720984    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:25.747517    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:25.789126    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.789126    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:25.792555    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:25.825100    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.825100    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:25.829108    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:25.859944    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.859944    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:25.862936    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:25.899027    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.899027    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:25.903029    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:25.932069    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.932069    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:25.937652    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:25.970039    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.970039    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:25.974772    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:26.007166    4248 logs.go:282] 0 containers: []
	W1212 21:36:26.007166    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:26.010547    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:26.043326    4248 logs.go:282] 0 containers: []
	W1212 21:36:26.043326    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:26.043380    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:26.043380    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:26.136579    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:26.129570   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.130713   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.131776   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.133029   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.133944   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:26.129570   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.130713   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.131776   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.133029   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.133944   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:26.136579    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:26.136579    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:26.164100    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:26.164100    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:26.215761    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:26.215761    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:26.284627    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:26.284627    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:28.841950    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:28.867715    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:28.905745    4248 logs.go:282] 0 containers: []
	W1212 21:36:28.905745    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:28.908970    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:28.939518    4248 logs.go:282] 0 containers: []
	W1212 21:36:28.939518    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:28.943636    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:28.973085    4248 logs.go:282] 0 containers: []
	W1212 21:36:28.973085    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:28.977068    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:29.006533    4248 logs.go:282] 0 containers: []
	W1212 21:36:29.006533    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:29.011428    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:29.051385    4248 logs.go:282] 0 containers: []
	W1212 21:36:29.051385    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:29.055841    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:29.091342    4248 logs.go:282] 0 containers: []
	W1212 21:36:29.091342    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:29.095332    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:29.123336    4248 logs.go:282] 0 containers: []
	W1212 21:36:29.123336    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:29.126340    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:29.155367    4248 logs.go:282] 0 containers: []
	W1212 21:36:29.155367    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:29.155367    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:29.155367    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:29.207287    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:29.207287    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:29.272168    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:29.272168    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:29.312257    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:29.312257    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:29.391617    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:29.382784   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.383879   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.384928   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.386418   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.387786   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:29.382784   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.383879   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.384928   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.386418   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.387786   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:29.391617    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:29.391617    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:31.923841    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:31.950124    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:31.983967    4248 logs.go:282] 0 containers: []
	W1212 21:36:31.983967    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:31.987737    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:32.015027    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.015027    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:32.020109    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:32.055983    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.056068    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:32.059730    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:32.089140    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.089140    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:32.094462    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:32.122929    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.122929    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:32.126837    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:32.156251    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.156251    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:32.160350    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:32.191862    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.191949    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:32.195885    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:32.223866    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.223925    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:32.223925    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:32.223950    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:32.255049    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:32.255049    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:32.302818    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:32.302880    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:32.366288    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:32.366288    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:32.405752    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:32.405752    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:32.490704    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:32.476428   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.477300   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.482162   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.483107   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.484601   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:32.476428   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.477300   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.482162   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.483107   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.484601   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:34.995924    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:35.024010    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:35.056509    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.056509    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:35.060912    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:35.093115    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.093115    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:35.097758    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:35.128352    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.128352    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:35.132438    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:35.159545    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.159545    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:35.163881    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:35.193455    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.193455    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:35.197292    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:35.225826    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.225826    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:35.230118    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:35.258718    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.258718    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:35.262754    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:35.289884    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.289884    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:35.289884    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:35.289884    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:35.354177    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:35.354177    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:35.392766    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:35.393766    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:35.508577    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:35.495201   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.497790   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.499934   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.500799   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.503455   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:35.495201   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.497790   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.499934   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.500799   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.503455   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:35.508577    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:35.508577    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:35.536964    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:35.538023    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:38.113096    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:38.138012    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:38.170611    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.170611    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:38.174540    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:38.203460    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.203460    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:38.209947    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:38.239843    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.239843    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:38.243116    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:38.271611    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.271611    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:38.275487    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:38.305418    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.305450    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:38.309409    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:38.336902    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.336902    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:38.340380    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:38.367606    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.367606    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:38.373821    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:38.402583    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.402583    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:38.402583    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:38.402583    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:38.438279    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:38.438279    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:38.525316    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:38.512227   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.513179   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.516702   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.518912   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.519829   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:38.512227   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.513179   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.516702   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.518912   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.519829   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:38.525316    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:38.525316    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:38.552742    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:38.553263    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:38.623531    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:38.623531    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:41.192803    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:41.221527    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:41.253765    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.253765    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:41.258162    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:41.286154    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.286154    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:41.290125    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:41.316985    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.316985    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:41.321219    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:41.349797    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.349797    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:41.353105    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:41.383082    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.383082    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:41.386895    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:41.414456    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.414456    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:41.418483    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:41.449520    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.449577    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:41.453163    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:41.486452    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.486504    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:41.486504    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:41.486504    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:41.547617    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:41.547617    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:41.587426    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:41.587426    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:41.672162    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:41.660909   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.663125   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.664194   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.665601   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.666634   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:41.660909   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.663125   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.664194   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.665601   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.666634   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:41.672162    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:41.672162    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:41.698838    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:41.698838    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:44.254238    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:44.279639    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:44.313852    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.313852    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:44.317789    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:44.346488    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.346488    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:44.349923    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:44.379740    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.379774    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:44.383168    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:44.412140    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.412140    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:44.416191    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:44.460651    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.460681    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:44.465023    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:44.496502    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.496526    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:44.500357    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:44.532104    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.532155    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:44.536284    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:44.564677    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.564677    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:44.564677    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:44.564768    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:44.642641    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:44.642641    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:44.681185    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:44.681185    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:44.775811    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:44.763716   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.764946   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.767080   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.768963   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.770287   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:44.763716   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.764946   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.767080   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.768963   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.770287   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:44.775858    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:44.775858    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:44.802443    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:44.802443    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:47.355434    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:47.380861    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:47.416615    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.416688    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:47.422899    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:47.449927    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.449927    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:47.453937    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:47.482382    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.482382    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:47.486265    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:47.517752    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.517752    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:47.521863    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:47.553097    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.553097    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:47.557020    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:47.586229    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.586229    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:47.590605    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:47.629776    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.629776    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:47.633503    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:47.660408    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.660408    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:47.660408    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:47.660408    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:47.751292    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:47.741586   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.742768   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.744535   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.745790   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.746879   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:47.741586   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.742768   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.744535   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.745790   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.746879   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:47.751292    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:47.751292    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:47.779192    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:47.779254    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:47.837296    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:47.837296    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:47.900027    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:47.900027    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:50.444550    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:50.467997    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:50.496690    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.496690    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:50.500967    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:50.526317    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.526317    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:50.530527    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:50.561433    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.561433    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:50.566001    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:50.618519    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.618519    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:50.622092    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:50.650073    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.650073    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:50.655016    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:50.683594    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.683623    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:50.687452    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:50.718509    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.718509    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:50.724946    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:50.757545    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.757577    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:50.757618    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:50.757618    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:50.819457    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:50.819457    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:50.858548    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:50.858548    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:50.941749    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:50.931367   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.932776   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.933833   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.935487   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.938377   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:50.931367   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.932776   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.933833   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.935487   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.938377   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:50.941749    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:50.941749    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:50.969772    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:50.969772    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:53.520939    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:53.549491    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:53.583344    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.583344    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:53.588894    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:53.618751    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.618751    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:53.623090    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:53.650283    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.650283    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:53.656108    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:53.682662    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.682727    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:53.686551    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:53.713705    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.713705    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:53.717716    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:53.744792    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.744792    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:53.749211    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:53.779976    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.779976    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:53.783888    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:53.815109    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.815109    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:53.815109    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:53.815109    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:53.876921    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:53.876921    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:53.916304    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:53.916304    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:54.003977    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:53.994198   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.995584   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.997212   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.998274   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.999401   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:53.994198   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.995584   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.997212   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.998274   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.999401   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:54.004510    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:54.004510    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:54.033807    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:54.033807    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:56.586896    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:56.610373    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:56.643875    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.643875    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:56.648210    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:56.679979    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.679979    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:56.684252    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:56.712701    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.712745    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:56.716425    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:56.746231    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.746231    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:56.750051    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:56.778902    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.778902    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:56.784361    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:56.813624    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.813624    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:56.817949    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:56.846221    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.846221    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:56.849772    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:56.880299    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.880299    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:56.880299    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:56.880299    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:56.945090    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:56.946089    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:56.985505    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:56.985505    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:57.077375    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:57.068729   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.069760   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.070813   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.071834   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.072749   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:57.068729   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.069760   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.070813   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.071834   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.072749   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:57.077375    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:57.077375    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:57.103533    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:57.103533    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:59.659092    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:59.684113    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:59.716016    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.716040    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:59.719576    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:59.749209    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.749209    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:59.752876    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:59.781442    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.781442    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:59.785342    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:59.814766    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.814766    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:59.818786    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:59.846373    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.846373    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:59.849782    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:59.877994    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.877994    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:59.881893    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:59.910479    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.910479    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:59.914372    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:59.946561    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.946561    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:59.946561    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:59.946561    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:00.008124    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:00.008124    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:00.047147    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:00.047147    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:00.137432    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:00.126870   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.127736   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.130207   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.131199   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.132348   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:00.126870   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.127736   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.130207   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.131199   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.132348   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:00.137480    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:00.137480    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:00.167211    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:00.167211    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:02.725601    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:02.750880    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:02.781655    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.781720    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:02.785930    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:02.814342    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.815352    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:02.819060    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:02.848212    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.848212    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:02.852622    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:02.879034    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.879034    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:02.883002    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:02.914061    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.914061    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:02.918271    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:02.946216    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.946289    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:02.949752    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:02.979537    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.979570    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:02.983289    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:03.012201    4248 logs.go:282] 0 containers: []
	W1212 21:37:03.012201    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:03.012201    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:03.012201    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:03.098494    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:03.086265   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.087072   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.089538   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.090486   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.092996   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:03.086265   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.087072   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.089538   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.090486   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.092996   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:03.098494    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:03.098494    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:03.124942    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:03.124942    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:03.172838    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:03.172838    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:03.233652    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:03.233652    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:05.778260    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:05.806049    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:05.834569    4248 logs.go:282] 0 containers: []
	W1212 21:37:05.834569    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:05.838184    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:05.871331    4248 logs.go:282] 0 containers: []
	W1212 21:37:05.871331    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:05.874924    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:05.904108    4248 logs.go:282] 0 containers: []
	W1212 21:37:05.904108    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:05.907882    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:05.941911    4248 logs.go:282] 0 containers: []
	W1212 21:37:05.941911    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:05.945711    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:05.978806    4248 logs.go:282] 0 containers: []
	W1212 21:37:05.978845    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:05.983103    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:06.010395    4248 logs.go:282] 0 containers: []
	W1212 21:37:06.010395    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:06.015899    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:06.043426    4248 logs.go:282] 0 containers: []
	W1212 21:37:06.043475    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:06.047525    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:06.075777    4248 logs.go:282] 0 containers: []
	W1212 21:37:06.075777    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:06.075777    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:06.075777    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:06.140912    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:06.140912    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:06.180839    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:06.180839    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:06.273920    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:06.262433   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.263564   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.264506   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.265910   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.267120   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:06.262433   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.263564   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.264506   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.265910   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.267120   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:06.273941    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:06.273941    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:06.301408    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:06.301408    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:08.853362    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:08.880482    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:08.912285    4248 logs.go:282] 0 containers: []
	W1212 21:37:08.912285    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:08.915914    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:08.945359    4248 logs.go:282] 0 containers: []
	W1212 21:37:08.945359    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:08.951021    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:08.978398    4248 logs.go:282] 0 containers: []
	W1212 21:37:08.978398    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:08.981959    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:09.013763    4248 logs.go:282] 0 containers: []
	W1212 21:37:09.013763    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:09.017724    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:09.045423    4248 logs.go:282] 0 containers: []
	W1212 21:37:09.045423    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:09.049596    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:09.077554    4248 logs.go:282] 0 containers: []
	W1212 21:37:09.077554    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:09.081163    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:09.108945    4248 logs.go:282] 0 containers: []
	W1212 21:37:09.109001    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:09.112577    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:09.141679    4248 logs.go:282] 0 containers: []
	W1212 21:37:09.141740    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:09.141765    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:09.141765    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:09.207494    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:09.208014    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:09.275675    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:09.275675    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:09.320177    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:09.320252    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:09.418820    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:09.405124   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.406042   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.410434   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.411496   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.412429   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:09.405124   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.406042   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.410434   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.411496   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.412429   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:09.418849    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:09.418849    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:11.950067    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:11.974163    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:12.007025    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.007025    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:12.010964    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:12.042863    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.042863    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:12.046143    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:12.076655    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.076726    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:12.080236    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:12.107161    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.107161    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:12.113344    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:12.142179    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.142272    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:12.146446    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:12.176797    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.176797    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:12.180681    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:12.209797    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.209797    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:12.213605    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:12.244494    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.244494    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:12.244494    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:12.244494    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:12.332970    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:12.322197   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.323340   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.325130   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.326221   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.328022   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:37:12.332970    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:12.332970    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:12.362486    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:12.363006    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:12.407548    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:12.407548    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:12.469640    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:12.469640    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:15.019141    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:15.042869    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:15.073404    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.073404    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:15.076962    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:15.105390    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.105390    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:15.109785    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:15.143740    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.143775    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:15.147734    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:15.174650    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.174711    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:15.178235    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:15.207870    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.207870    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:15.212288    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:15.248454    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.248454    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:15.253060    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:15.282067    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.282067    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:15.285778    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:15.317032    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.317032    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:15.317032    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:15.317032    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:15.350767    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:15.350767    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:15.408508    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:15.408508    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:15.471124    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:15.471124    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:15.511541    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:15.511541    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:15.597230    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:15.586821   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.588485   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.590856   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.591864   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.593117   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:37:18.103161    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:18.132020    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:18.167621    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.167621    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:18.171555    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:18.197535    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.197535    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:18.201484    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:18.231207    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.231237    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:18.234569    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:18.262608    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.262608    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:18.266310    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:18.291496    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.291496    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:18.296129    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:18.323567    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.323567    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:18.328112    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:18.363055    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.363055    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:18.368448    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:18.398543    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.398543    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:18.398543    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:18.398543    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:18.451687    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:18.451738    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:18.512324    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:18.512324    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:18.553614    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:18.553614    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:18.644707    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:18.634792   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.635628   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.638131   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.639176   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.640339   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:37:18.644734    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:18.644779    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:21.175562    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:21.201442    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:21.233480    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.233480    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:21.237891    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:21.267032    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.267032    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:21.273539    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:21.301291    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.301291    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:21.304622    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:21.333953    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.333953    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:21.336973    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:21.366442    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.366442    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:21.370770    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:21.401250    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.401326    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:21.406507    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:21.434989    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.434989    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:21.438536    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:21.468847    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.468895    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:21.468895    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:21.468937    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:21.506543    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:21.506543    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:21.592900    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:21.582025   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.584472   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.585983   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.587799   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.588955   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:37:21.592928    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:21.592980    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:21.624073    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:21.624114    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:21.675642    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:21.675642    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:24.243223    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:24.272878    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:24.306285    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.306285    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:24.310609    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:24.340982    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.340982    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:24.344434    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:24.371790    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.371790    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:24.376448    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:24.403045    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.403045    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:24.406643    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:24.436352    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.436352    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:24.440299    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:24.472033    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.472033    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:24.476007    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:24.508554    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.508554    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:24.512161    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:24.542727    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.542727    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:24.542727    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:24.542727    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:24.570829    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:24.570829    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:24.618660    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:24.618660    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:24.682106    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:24.682106    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:24.721952    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:24.721952    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:24.799468    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:24.791295   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.792228   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.793454   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.794593   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.795784   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:37:27.305001    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:27.330707    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:27.365828    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.365828    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:27.370558    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:27.396820    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.396820    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:27.401269    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:27.430536    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.430536    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:27.434026    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:27.462920    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.462920    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:27.466302    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:27.494753    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.494753    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:27.498776    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:27.526827    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.526827    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:27.530938    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:27.558811    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.558811    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:27.562896    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:27.593235    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.593235    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:27.593235    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:27.593235    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:27.645061    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:27.645061    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:27.708198    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:27.708198    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:27.746161    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:27.746161    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:27.834200    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:27.822744   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.823517   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.828076   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.828851   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.830876   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:37:27.834200    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:27.834200    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:30.365194    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:30.390907    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:30.422859    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.422859    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:30.426658    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:30.458081    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.458081    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:30.462130    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:30.492792    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.492838    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:30.496517    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:30.535575    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.535575    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:30.539664    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:30.570934    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.570934    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:30.575357    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:30.606013    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.606013    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:30.610553    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:30.637448    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.637448    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:30.640965    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:30.670791    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.670866    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:30.670866    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:30.670866    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:30.701120    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:30.701120    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:30.751223    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:30.751223    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:30.813495    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:30.813495    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:30.853428    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:30.853428    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:30.937812    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:30.926651   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.927983   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.930891   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.931995   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.933300   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:30.926651   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.927983   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.930891   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.931995   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.933300   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:33.442840    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:33.471704    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:33.504567    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.504567    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:33.508564    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:33.540112    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.540147    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:33.544036    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:33.572905    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.572905    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:33.576956    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:33.606272    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.606334    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:33.610145    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:33.637137    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.637137    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:33.641246    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:33.670136    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.670136    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:33.673715    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:33.701659    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.701659    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:33.705326    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:33.736499    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.736585    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:33.736585    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:33.736585    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:33.802820    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:33.802820    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:33.841898    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:33.841898    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:33.928502    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:33.917072   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.918297   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.919525   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.922045   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.923688   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:33.917072   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.918297   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.919525   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.922045   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.923688   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:33.928502    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:33.928502    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:33.954803    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:33.954803    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:36.508990    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:36.532529    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:36.565107    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.565107    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:36.569219    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:36.599219    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.599219    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:36.604130    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:36.641323    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.641399    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:36.644874    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:36.678077    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.678077    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:36.681676    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:36.717361    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.717361    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:36.720484    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:36.758068    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.758131    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:36.761928    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:36.788886    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.788886    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:36.792763    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:36.822518    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.822518    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:36.822518    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:36.822594    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:36.886902    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:36.886902    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:36.926353    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:36.926353    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:37.017351    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:37.005572   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.006475   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.009602   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.011562   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.013067   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:37.005572   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.006475   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.009602   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.011562   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.013067   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:37.017351    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:37.017351    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:37.043945    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:37.043945    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:39.613292    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:39.638402    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:39.668963    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.668963    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:39.674050    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:39.706941    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.706993    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:39.711641    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:39.743407    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.743407    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:39.748540    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:39.776567    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.776567    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:39.780756    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:39.809769    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.809769    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:39.814028    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:39.841619    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.841619    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:39.845432    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:39.872294    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.872294    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:39.876039    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:39.906559    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.906559    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:39.906559    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:39.906559    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:39.971123    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:39.971123    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:40.010767    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:40.010767    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:40.121979    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:40.111150   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.112155   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.113799   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.114617   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.117879   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:40.111150   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.112155   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.113799   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.114617   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.117879   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:40.121979    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:40.121979    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:40.153150    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:40.153150    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:42.714553    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:42.739259    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:42.773825    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.773825    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:42.777653    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:42.806593    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.806617    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:42.811305    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:42.839804    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.839804    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:42.843545    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:42.871645    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.871645    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:42.877455    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:42.907575    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.907674    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:42.911474    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:42.947872    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.947872    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:42.951182    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:42.981899    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.981899    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:42.985358    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:43.015278    4248 logs.go:282] 0 containers: []
	W1212 21:37:43.015278    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:43.015278    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:43.015278    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:43.083520    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:43.083520    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:43.124100    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:43.124100    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:43.208232    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:43.199122   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.200440   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.201500   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.203019   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.204672   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:43.199122   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.200440   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.201500   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.203019   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.204672   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:43.208232    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:43.208232    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:43.234266    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:43.234266    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:45.791967    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:45.818451    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:45.851045    4248 logs.go:282] 0 containers: []
	W1212 21:37:45.851045    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:45.854848    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:45.880205    4248 logs.go:282] 0 containers: []
	W1212 21:37:45.880205    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:45.883681    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:45.910629    4248 logs.go:282] 0 containers: []
	W1212 21:37:45.910629    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:45.914618    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:45.944467    4248 logs.go:282] 0 containers: []
	W1212 21:37:45.944467    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:45.948393    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:45.979772    4248 logs.go:282] 0 containers: []
	W1212 21:37:45.979772    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:45.983154    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:46.011861    4248 logs.go:282] 0 containers: []
	W1212 21:37:46.011947    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:46.016147    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:46.043151    4248 logs.go:282] 0 containers: []
	W1212 21:37:46.043151    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:46.048940    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:46.101712    4248 logs.go:282] 0 containers: []
	W1212 21:37:46.101712    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:46.101712    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:46.101712    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:46.165060    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:46.165060    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:46.204152    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:46.204152    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:46.295737    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:46.284405   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.285323   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.289255   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.290702   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.291840   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:46.284405   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.285323   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.289255   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.290702   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.291840   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:46.295737    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:46.295737    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:46.323140    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:46.323657    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:48.876615    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:48.902293    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:48.935424    4248 logs.go:282] 0 containers: []
	W1212 21:37:48.935424    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:48.939391    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:48.966927    4248 logs.go:282] 0 containers: []
	W1212 21:37:48.966927    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:48.970734    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:49.001644    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.001644    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:49.005407    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:49.035360    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.035360    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:49.042740    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:49.074356    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.074356    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:49.078793    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:49.110567    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.110625    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:49.114551    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:49.145236    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.145236    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:49.149599    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:49.177230    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.177230    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:49.177230    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:49.177230    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:49.240142    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:49.240142    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:49.278723    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:49.278723    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:49.367647    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:49.358947   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.359901   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.361621   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.363095   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.364121   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:49.358947   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.359901   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.361621   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.363095   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.364121   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:49.367647    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:49.367647    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:49.397635    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:49.397635    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:51.962408    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:51.992442    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:52.024460    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.024460    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:52.028629    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:52.060221    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.060221    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:52.064265    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:52.104649    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.104649    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:52.109138    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:52.140487    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.140545    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:52.144120    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:52.172932    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.172932    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:52.176618    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:52.206650    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.206650    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:52.210399    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:52.236993    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.236993    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:52.240861    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:52.270655    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.270655    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:52.270655    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:52.270655    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:52.335104    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:52.335104    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:52.370957    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:52.371840    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:52.457985    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:52.448019   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.449089   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.450136   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.451710   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.452676   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:52.448019   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.449089   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.450136   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.451710   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.452676   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:52.457985    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:52.457985    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:52.486332    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:52.486332    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:55.041298    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:55.065637    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:55.094280    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.094280    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:55.097903    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:55.126902    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.126902    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:55.130716    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:55.159228    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.159228    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:55.163220    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:55.192251    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.192251    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:55.195844    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:55.221302    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.221342    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:55.224818    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:55.251600    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.251600    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:55.258126    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:55.288004    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.288004    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:55.292538    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:55.321503    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.321503    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:55.321503    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:55.321503    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:55.382091    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:55.382091    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:55.417183    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:55.417183    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:55.505809    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:55.497729   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.498853   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.500068   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.501049   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.502394   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:55.497729   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.498853   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.500068   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.501049   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.502394   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:55.505857    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:55.505922    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:55.533563    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:55.533563    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:58.084879    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:58.108938    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:58.141011    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.141011    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:58.144507    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:58.173301    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.173301    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:58.177012    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:58.205946    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.205946    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:58.209603    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:58.239537    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.239626    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:58.243771    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:58.274180    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.274180    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:58.278119    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:58.306549    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.306589    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:58.310707    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:58.341993    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.341993    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:58.345805    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:58.374110    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.374110    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:58.374110    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:58.374110    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:58.438540    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:58.438540    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:58.479144    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:58.479144    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:58.563382    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:58.555856   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.556864   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.558351   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.559659   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.561038   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:58.555856   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.556864   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.558351   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.559659   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.561038   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:58.563382    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:58.563382    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:58.590030    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:58.591001    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:01.143523    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:01.166879    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:01.204311    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.204311    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:01.208667    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:01.236959    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.236959    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:01.241497    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:01.268362    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.268362    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:01.272390    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:01.301769    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.301769    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:01.306386    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:01.334250    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.334250    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:01.338080    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:01.367719    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.367719    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:01.371554    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:01.400912    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.400912    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:01.405087    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:01.433025    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.433079    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:01.433112    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:01.433140    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:01.498716    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:01.498716    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:01.537789    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:01.537789    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:01.621520    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:01.609272   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.610819   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.612400   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.614811   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.616071   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:01.609272   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.610819   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.612400   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.614811   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.616071   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:01.621520    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:01.621520    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:01.651241    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:01.651241    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:04.202726    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:04.233568    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:04.264266    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.264266    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:04.268731    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:04.299179    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.299179    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:04.304521    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:04.333532    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.333532    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:04.337480    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:04.370718    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.370774    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:04.374487    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:04.404113    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.404113    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:04.407484    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:04.439641    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.439641    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:04.442993    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:04.473704    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.473745    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:04.478029    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:04.506810    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.506810    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:04.506810    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:04.506810    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:04.536546    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:04.536546    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:04.595827    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:04.595827    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:04.655750    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:04.655750    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:04.693978    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:04.693978    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:04.780038    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:04.769629   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.770581   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.771942   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.773186   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.774761   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:04.769629   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.770581   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.771942   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.773186   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.774761   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:07.285343    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:07.309791    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:07.342594    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.342658    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:07.346771    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:07.375078    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.375078    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:07.378622    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:07.406406    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.406406    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:07.409700    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:07.439671    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.439702    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:07.443226    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:07.474113    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.474113    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:07.478278    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:07.506266    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.506266    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:07.511246    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:07.539784    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.539813    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:07.543598    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:07.571190    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.571190    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:07.571190    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:07.571190    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:07.621969    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:07.621969    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:07.686280    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:07.686280    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:07.729355    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:07.729355    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:07.818055    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:07.806835   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.807966   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.809280   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.811568   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.812813   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:07.806835   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.807966   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.809280   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.811568   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.812813   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:07.818055    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:07.818055    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:10.353048    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:10.380806    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:10.411111    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.411111    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:10.417906    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:10.445879    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.445879    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:10.449270    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:10.478782    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.478782    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:10.482418    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:10.514768    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.514768    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:10.518402    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:10.549807    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.549841    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:10.553625    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:10.584420    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.584420    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:10.590061    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:10.617570    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.617570    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:10.621915    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:10.650697    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.650697    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:10.650697    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:10.650697    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:10.688035    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:10.688035    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:10.779967    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:10.768220   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.770447   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.773427   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.775250   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.776328   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:10.768220   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.770447   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.773427   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.775250   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.776328   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:10.779967    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:10.779967    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:10.808999    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:10.808999    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:10.857901    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:10.857901    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:13.426838    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:13.455711    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:13.487399    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.487399    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:13.491220    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:13.521694    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.521694    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:13.525468    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:13.554648    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.554648    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:13.559306    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:13.587335    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.587335    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:13.591025    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:13.619654    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.619654    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:13.623563    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:13.653939    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.653939    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:13.657955    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:13.687366    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.687396    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:13.690775    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:13.722113    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.722193    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:13.722231    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:13.722231    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:13.810317    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:13.799024   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.800050   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.801497   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.802454   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.803945   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:13.799024   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.800050   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.801497   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.802454   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.803945   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:13.810317    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:13.810317    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:13.838155    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:13.838155    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:13.883053    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:13.883053    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:13.946291    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:13.946291    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:16.490914    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:16.517055    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:16.546289    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.546289    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:16.549648    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:16.579266    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.579266    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:16.583479    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:16.622750    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.622824    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:16.625968    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:16.653518    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.653558    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:16.657430    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:16.684716    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.684716    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:16.688471    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:16.715508    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.715508    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:16.720093    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:16.747105    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.747105    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:16.751009    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:16.778855    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.778889    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:16.778935    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:16.778935    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:16.866923    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:16.857374   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.858226   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.860682   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.861845   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.863178   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:16.857374   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.858226   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.860682   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.861845   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.863178   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:16.866923    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:16.866923    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:16.893634    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:16.893634    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:16.947106    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:16.947106    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:17.009695    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:17.009695    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:19.555421    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:19.585126    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:19.618491    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.618491    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:19.621943    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:19.649934    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.649934    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:19.654446    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:19.682441    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.682441    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:19.686687    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:19.713873    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.713873    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:19.718086    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:19.746901    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.746901    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:19.751802    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:19.780998    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.780998    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:19.785656    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:19.814435    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.814435    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:19.818376    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:19.842539    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.842539    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:19.842539    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:19.842539    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:19.931943    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:19.922035   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.923477   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.924372   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.926985   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.927824   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:19.922035   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.923477   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.924372   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.926985   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.927824   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:19.931943    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:19.931943    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:19.962377    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:19.962377    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:20.016397    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:20.016397    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:20.080069    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:20.080069    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:22.623830    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:22.648339    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:22.676455    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.676455    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:22.680434    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:22.707663    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.707663    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:22.711156    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:22.740689    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.740689    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:22.747514    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:22.774589    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.774589    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:22.778733    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:22.809957    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.810016    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:22.814216    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:22.843548    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.843548    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:22.848917    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:22.881212    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.881212    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:22.885127    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:22.912249    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.912249    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:22.912249    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:22.912249    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:22.971764    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:22.971764    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:23.012466    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:23.012466    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:23.098040    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:23.088804   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.090233   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.092125   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.092902   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.095639   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:23.088804   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.090233   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.092125   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.092902   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.095639   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:23.098040    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:23.098040    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:23.125246    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:23.125299    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:25.680678    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:25.710865    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:25.744205    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.744205    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:25.748694    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:25.775965    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.775965    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:25.780266    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:25.809226    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.809226    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:25.813428    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:25.843074    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.843074    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:25.847624    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:25.875245    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.875307    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:25.878757    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:25.909526    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.909526    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:25.913226    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:25.940382    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.940382    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:25.945238    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:25.971090    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.971123    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:25.971123    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:25.971123    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:26.056782    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:26.046652   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.047515   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.050195   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.051210   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.054000   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:26.046652   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.047515   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.050195   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.051210   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.054000   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:26.056824    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:26.056824    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:26.088188    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:26.088188    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:26.134947    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:26.134990    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:26.195007    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:26.195007    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:28.743432    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:28.770616    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:28.803520    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.803520    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:28.810180    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:28.835854    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.835854    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:28.839216    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:28.867332    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.867332    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:28.871770    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:28.898967    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.899021    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:28.902579    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:28.930727    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.930781    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:28.934892    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:28.965429    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.965484    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:28.968912    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:28.994989    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.995086    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:28.998524    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:29.029494    4248 logs.go:282] 0 containers: []
	W1212 21:38:29.029494    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:29.029494    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:29.029494    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:29.084546    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:29.084546    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:29.146031    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:29.146031    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:29.185235    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:29.185235    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:29.276958    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:29.265529   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.266811   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.267596   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.272121   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.273001   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:29.265529   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.266811   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.267596   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.272121   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.273001   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:29.277002    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:29.277048    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:31.813255    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:31.837157    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:31.867469    4248 logs.go:282] 0 containers: []
	W1212 21:38:31.867532    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:31.871061    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:31.899568    4248 logs.go:282] 0 containers: []
	W1212 21:38:31.899568    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:31.903533    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:31.932812    4248 logs.go:282] 0 containers: []
	W1212 21:38:31.932812    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:31.937348    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:31.968624    4248 logs.go:282] 0 containers: []
	W1212 21:38:31.968624    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:31.972596    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:31.999542    4248 logs.go:282] 0 containers: []
	W1212 21:38:31.999542    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:32.004209    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:32.034665    4248 logs.go:282] 0 containers: []
	W1212 21:38:32.034665    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:32.038848    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:32.068480    4248 logs.go:282] 0 containers: []
	W1212 21:38:32.068480    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:32.073156    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:32.104268    4248 logs.go:282] 0 containers: []
	W1212 21:38:32.104268    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:32.104268    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:32.104268    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:32.168878    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:32.168878    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:32.209739    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:32.209739    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:32.299388    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:32.287377   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.288363   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.290983   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.292213   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.293196   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:32.287377   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.288363   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.290983   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.292213   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.293196   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:32.299388    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:32.299388    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:32.326590    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:32.327171    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:34.882209    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:34.906646    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:34.937770    4248 logs.go:282] 0 containers: []
	W1212 21:38:34.937770    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:34.941176    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:34.970749    4248 logs.go:282] 0 containers: []
	W1212 21:38:34.970749    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:34.974824    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:35.003731    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.003731    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:35.011153    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:35.043865    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.043865    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:35.047948    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:35.079197    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.079197    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:35.084870    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:35.111591    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.111645    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:35.115847    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:35.144310    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.144310    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:35.148221    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:35.176803    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.176833    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:35.176833    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:35.176833    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:35.236846    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:35.236846    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:35.284685    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:35.284685    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:35.374702    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:35.363981   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.364969   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.367167   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.368565   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.369594   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:35.363981   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.364969   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.367167   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.368565   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.369594   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:35.374702    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:35.374702    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:35.402523    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:35.402584    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:37.960369    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:37.991489    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:38.021000    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.021059    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:38.024791    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:38.056577    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.056577    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:38.061074    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:38.091553    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.091619    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:38.095584    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:38.124245    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.124245    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:38.127814    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:38.156149    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.156149    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:38.159694    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:38.191453    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.191475    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:38.195307    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:38.226021    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.226046    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:38.229445    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:38.258701    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.258701    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:38.258701    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:38.258701    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:38.324178    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:38.324178    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:38.363665    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:38.363665    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:38.454082    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:38.443711   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.444603   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.447135   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.448189   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.449070   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:38.443711   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.444603   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.447135   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.448189   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.449070   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:38.454082    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:38.454082    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:38.481686    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:38.481686    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:41.036796    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:41.064580    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:41.096576    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.096636    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:41.100082    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:41.131382    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.131439    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:41.135017    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:41.164298    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.164360    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:41.167964    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:41.198065    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.198065    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:41.202878    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:41.230510    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.230510    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:41.234299    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:41.263767    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.263767    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:41.267078    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:41.296096    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.296096    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:41.299444    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:41.332967    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.332967    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:41.332967    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:41.332967    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:41.380925    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:41.380925    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:41.445577    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:41.445577    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:41.484612    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:41.484612    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:41.569457    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:41.558338   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.559501   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.561053   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.561965   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.565400   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:41.558338   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.559501   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.561053   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.561965   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.565400   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:41.569457    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:41.569457    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:44.125865    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:44.149891    4248 out.go:203] 
	W1212 21:38:44.151830    4248 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1212 21:38:44.151830    4248 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1212 21:38:44.152349    4248 out.go:285] * Related issues:
	W1212 21:38:44.152403    4248 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1212 21:38:44.152403    4248 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1212 21:38:44.154560    4248 out.go:203] 
	
	
	==> Docker <==
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.732391828Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.732480039Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.732490940Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.732497041Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.732552048Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.732584552Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.732619056Z" level=info msg="Initializing buildkit"
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.834443812Z" level=info msg="Completed buildkit initialization"
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.839552952Z" level=info msg="Daemon has completed initialization"
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.839689269Z" level=info msg="API listen on /run/docker.sock"
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.839754977Z" level=info msg="API listen on [::]:2376"
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.839713872Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 21:30:21 no-preload-285600 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 12 21:30:22 no-preload-285600 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 21:30:22 no-preload-285600 cri-dockerd[1225]: time="2025-12-12T21:30:22Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 12 21:30:22 no-preload-285600 cri-dockerd[1225]: time="2025-12-12T21:30:22Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 12 21:30:22 no-preload-285600 cri-dockerd[1225]: time="2025-12-12T21:30:22Z" level=info msg="Start docker client with request timeout 0s"
	Dec 12 21:30:22 no-preload-285600 cri-dockerd[1225]: time="2025-12-12T21:30:22Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 12 21:30:22 no-preload-285600 cri-dockerd[1225]: time="2025-12-12T21:30:22Z" level=info msg="Loaded network plugin cni"
	Dec 12 21:30:22 no-preload-285600 cri-dockerd[1225]: time="2025-12-12T21:30:22Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 12 21:30:22 no-preload-285600 cri-dockerd[1225]: time="2025-12-12T21:30:22Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 12 21:30:22 no-preload-285600 cri-dockerd[1225]: time="2025-12-12T21:30:22Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 12 21:30:22 no-preload-285600 cri-dockerd[1225]: time="2025-12-12T21:30:22Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 12 21:30:22 no-preload-285600 cri-dockerd[1225]: time="2025-12-12T21:30:22Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 12 21:30:22 no-preload-285600 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:45:31.966567   16959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:45:31.967637   16959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:45:31.969202   16959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:45:31.970569   16959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:45:31.974155   16959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +5.817259] CPU: 7 PID: 461935 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f4cc709eb20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f4cc709eaf6.
	[  +0.000001] RSP: 002b:00007ffc97ee3b30 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.851832] CPU: 4 PID: 462074 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f8ee5e9fb20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f8ee5e9faf6.
	[  +0.000001] RSP: 002b:00007ffc84e853d0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 21:45:32 up  2:47,  0 user,  load average: 0.34, 0.58, 1.61
	Linux no-preload-285600 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 21:45:29 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:45:29 no-preload-285600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1208.
	Dec 12 21:45:29 no-preload-285600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:45:29 no-preload-285600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:45:29 no-preload-285600 kubelet[16796]: E1212 21:45:29.763137   16796 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:45:29 no-preload-285600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:45:29 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:45:30 no-preload-285600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1209.
	Dec 12 21:45:30 no-preload-285600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:45:30 no-preload-285600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:45:30 no-preload-285600 kubelet[16825]: E1212 21:45:30.521863   16825 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:45:30 no-preload-285600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:45:30 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:45:31 no-preload-285600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1210.
	Dec 12 21:45:31 no-preload-285600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:45:31 no-preload-285600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:45:31 no-preload-285600 kubelet[16837]: E1212 21:45:31.286566   16837 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:45:31 no-preload-285600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:45:31 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:45:31 no-preload-285600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1211.
	Dec 12 21:45:31 no-preload-285600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:45:31 no-preload-285600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:45:32 no-preload-285600 kubelet[16968]: E1212 21:45:32.051606   16968 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:45:32 no-preload-285600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:45:32 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-285600 -n no-preload-285600
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-285600 -n no-preload-285600: exit status 2 (596.4905ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-285600" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (545.48s)

TestStartStop/group/newest-cni/serial/Pause (13.62s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-449900 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-449900 -n newest-cni-449900
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-449900 -n newest-cni-449900: exit status 2 (587.8268ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-449900 -n newest-cni-449900
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-449900 -n newest-cni-449900: exit status 2 (582.6697ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p newest-cni-449900 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-449900 -n newest-cni-449900
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-449900 -n newest-cni-449900: exit status 2 (605.7821ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause apiserver status = "Stopped"; want = "Running"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-449900 -n newest-cni-449900
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-449900 -n newest-cni-449900: exit status 2 (587.4654ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-449900
helpers_test.go:244: (dbg) docker inspect newest-cni-449900:

-- stdout --
	[
	    {
	        "Id": "8fae8198a0e2f6167bc49d5761b5dae3aaaf06af529e7d9526a1f943f9f5952a",
	        "Created": "2025-12-12T21:22:35.195234972Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 455053,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T21:32:31.209250611Z",
	            "FinishedAt": "2025-12-12T21:32:28.637338591Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/8fae8198a0e2f6167bc49d5761b5dae3aaaf06af529e7d9526a1f943f9f5952a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8fae8198a0e2f6167bc49d5761b5dae3aaaf06af529e7d9526a1f943f9f5952a/hostname",
	        "HostsPath": "/var/lib/docker/containers/8fae8198a0e2f6167bc49d5761b5dae3aaaf06af529e7d9526a1f943f9f5952a/hosts",
	        "LogPath": "/var/lib/docker/containers/8fae8198a0e2f6167bc49d5761b5dae3aaaf06af529e7d9526a1f943f9f5952a/8fae8198a0e2f6167bc49d5761b5dae3aaaf06af529e7d9526a1f943f9f5952a-json.log",
	        "Name": "/newest-cni-449900",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-449900:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-449900",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4a9bf3c0ee4eaaabafc20e3de9d1f9691ed63701dacc3088a1369c1bfb243b50-init/diff:/var/lib/docker/overlay2/98a48e148b465d6f3cfef773b7defe35ccd4e6e0eb3c49d8b329f27b5b52fa09/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4a9bf3c0ee4eaaabafc20e3de9d1f9691ed63701dacc3088a1369c1bfb243b50/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4a9bf3c0ee4eaaabafc20e3de9d1f9691ed63701dacc3088a1369c1bfb243b50/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4a9bf3c0ee4eaaabafc20e3de9d1f9691ed63701dacc3088a1369c1bfb243b50/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-449900",
	                "Source": "/var/lib/docker/volumes/newest-cni-449900/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-449900",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-449900",
	                "name.minikube.sigs.k8s.io": "newest-cni-449900",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3f31e676235f907b98ae0eadc005b2979e05ef379d1d48aaed62fce9b8873d74",
	            "SandboxKey": "/var/run/docker/netns/3f31e676235f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63036"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63037"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63038"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63039"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63040"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-449900": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "bcedcac448e9e1d98fcddd7097fe310c50b6a637d5f23ebf519e961f822823ab",
	                    "EndpointID": "541ab21703cfa47e96a4b680fdee798dc399db4bccf57c1de0d2c6586095d103",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-449900",
	                        "8fae8198a0e2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-449900 -n newest-cni-449900
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-449900 -n newest-cni-449900: exit status 2 (579.0947ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-449900 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-449900 logs -n 25: (1.6900354s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                            ARGS                                                                                                            │           PROFILE            │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-124600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                         │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ stop    │ -p default-k8s-diff-port-124600 --alsologtostderr -v=3                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ image   │ embed-certs-729900 image list --format=json                                                                                                                                                                                │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ pause   │ -p embed-certs-729900 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ unpause │ -p embed-certs-729900 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-124600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                    │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ start   │ -p default-k8s-diff-port-124600 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:23 UTC │
	│ delete  │ -p embed-certs-729900                                                                                                                                                                                                      │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:23 UTC │
	│ delete  │ -p embed-certs-729900                                                                                                                                                                                                      │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ image   │ default-k8s-diff-port-124600 image list --format=json                                                                                                                                                                      │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ pause   │ -p default-k8s-diff-port-124600 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ unpause │ -p default-k8s-diff-port-124600 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ delete  │ -p default-k8s-diff-port-124600                                                                                                                                                                                            │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ delete  │ -p default-k8s-diff-port-124600                                                                                                                                                                                            │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ addons  │ enable metrics-server -p no-preload-285600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                    │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:28 UTC │                     │
	│ stop    │ -p no-preload-285600 --alsologtostderr -v=3                                                                                                                                                                                │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:30 UTC │ 12 Dec 25 21:30 UTC │
	│ addons  │ enable dashboard -p no-preload-285600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                               │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:30 UTC │ 12 Dec 25 21:30 UTC │
	│ start   │ -p no-preload-285600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:30 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-449900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                    │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:30 UTC │                     │
	│ stop    │ -p newest-cni-449900 --alsologtostderr -v=3                                                                                                                                                                                │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:32 UTC │ 12 Dec 25 21:32 UTC │
	│ addons  │ enable dashboard -p newest-cni-449900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                               │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:32 UTC │ 12 Dec 25 21:32 UTC │
	│ start   │ -p newest-cni-449900 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:32 UTC │                     │
	│ image   │ newest-cni-449900 image list --format=json                                                                                                                                                                                 │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:38 UTC │ 12 Dec 25 21:38 UTC │
	│ pause   │ -p newest-cni-449900 --alsologtostderr -v=1                                                                                                                                                                                │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:38 UTC │ 12 Dec 25 21:38 UTC │
	│ unpause │ -p newest-cni-449900 --alsologtostderr -v=1                                                                                                                                                                                │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:38 UTC │ 12 Dec 25 21:38 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 21:32:30
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 21:32:30.067892    4248 out.go:360] Setting OutFile to fd 948 ...
	I1212 21:32:30.114132    4248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:32:30.114655    4248 out.go:374] Setting ErrFile to fd 1420...
	I1212 21:32:30.114655    4248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:32:30.127088    4248 out.go:368] Setting JSON to false
	I1212 21:32:30.129182    4248 start.go:133] hostinfo: {"hostname":"minikube4","uptime":9288,"bootTime":1765565862,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 21:32:30.129182    4248 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 21:32:30.137179    4248 out.go:179] * [newest-cni-449900] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 21:32:30.141601    4248 notify.go:221] Checking for updates...
	I1212 21:32:30.144580    4248 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:32:30.147458    4248 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:32:30.149957    4248 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 21:32:30.152754    4248 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 21:32:30.158286    4248 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:32:30.162328    4248 config.go:182] Loaded profile config "newest-cni-449900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:32:30.162996    4248 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 21:32:30.280643    4248 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 21:32:30.284638    4248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:32:30.515373    4248 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 21:32:30.495157348 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:32:30.521131    4248 out.go:179] * Using the docker driver based on existing profile
	I1212 21:32:30.522918    4248 start.go:309] selected driver: docker
	I1212 21:32:30.523440    4248 start.go:927] validating driver "docker" against &{Name:newest-cni-449900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:32:30.523530    4248 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:32:30.607623    4248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:32:30.833176    4248 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 21:32:30.815233925 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:32:30.833176    4248 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 21:32:30.833176    4248 cni.go:84] Creating CNI manager for ""
	I1212 21:32:30.833176    4248 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:32:30.834178    4248 start.go:353] cluster config:
	{Name:newest-cni-449900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:32:30.837195    4248 out.go:179] * Starting "newest-cni-449900" primary control-plane node in "newest-cni-449900" cluster
	I1212 21:32:30.839178    4248 cache.go:134] Beginning downloading kic base image for docker with docker
	I1212 21:32:30.842177    4248 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:32:30.845192    4248 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:32:30.845192    4248 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 21:32:30.846208    4248 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1212 21:32:30.846208    4248 cache.go:65] Caching tarball of preloaded images
	I1212 21:32:30.846208    4248 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 21:32:30.846208    4248 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1212 21:32:30.846208    4248 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\config.json ...
	I1212 21:32:30.923261    4248 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:32:30.923313    4248 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:32:30.923343    4248 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:32:30.923343    4248 start.go:360] acquireMachinesLock for newest-cni-449900: {Name:mk48eea84400331f4298a4f493f82f3b8d104477 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:32:30.923343    4248 start.go:364] duration metric: took 0s to acquireMachinesLock for "newest-cni-449900"
	I1212 21:32:30.923343    4248 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:32:30.923343    4248 fix.go:54] fixHost starting: 
	I1212 21:32:30.931113    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:30.993597    4248 fix.go:112] recreateIfNeeded on newest-cni-449900: state=Stopped err=<nil>
	W1212 21:32:30.993597    4248 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:32:30.996597    4248 out.go:252] * Restarting existing docker container for "newest-cni-449900" ...
	I1212 21:32:31.000598    4248 cli_runner.go:164] Run: docker start newest-cni-449900
	I1212 21:32:31.538801    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:31.592240    4248 kic.go:430] container "newest-cni-449900" state is running.
	I1212 21:32:31.597242    4248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-449900
	I1212 21:32:31.648243    4248 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\config.json ...
	I1212 21:32:31.650253    4248 machine.go:94] provisionDockerMachine start ...
	I1212 21:32:31.653233    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:31.705233    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:31.706241    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:31.706241    4248 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:32:31.708237    4248 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 21:32:34.889437    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-449900
	
	I1212 21:32:34.889437    4248 ubuntu.go:182] provisioning hostname "newest-cni-449900"
	I1212 21:32:34.893345    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:34.953803    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:34.953803    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:34.953803    4248 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-449900 && echo "newest-cni-449900" | sudo tee /etc/hostname
	W1212 21:32:35.616553   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:32:35.153212    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-449900
	
	I1212 21:32:35.157498    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:35.214117    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:35.214117    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:35.214117    4248 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-449900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-449900/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-449900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:32:35.408770    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:32:35.408770    4248 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1212 21:32:35.408770    4248 ubuntu.go:190] setting up certificates
	I1212 21:32:35.408770    4248 provision.go:84] configureAuth start
	I1212 21:32:35.413085    4248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-449900
	I1212 21:32:35.465735    4248 provision.go:143] copyHostCerts
	I1212 21:32:35.466783    4248 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1212 21:32:35.466783    4248 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1212 21:32:35.466783    4248 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 21:32:35.468113    4248 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1212 21:32:35.468113    4248 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1212 21:32:35.468113    4248 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1212 21:32:35.469268    4248 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1212 21:32:35.469268    4248 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1212 21:32:35.469268    4248 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 21:32:35.469826    4248 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-449900 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-449900]
	I1212 21:32:35.674609    4248 provision.go:177] copyRemoteCerts
	I1212 21:32:35.678598    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:32:35.681582    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:35.739388    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:35.872528    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 21:32:35.899869    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 21:32:35.927844    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 21:32:35.957449    4248 provision.go:87] duration metric: took 548.67ms to configureAuth
	I1212 21:32:35.957449    4248 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:32:35.957975    4248 config.go:182] Loaded profile config "newest-cni-449900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:32:35.961590    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:36.020998    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:36.021502    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:36.021530    4248 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 21:32:36.199792    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1212 21:32:36.199792    4248 ubuntu.go:71] root file system type: overlay
	I1212 21:32:36.199792    4248 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 21:32:36.203186    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:36.259998    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:36.260186    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:36.260186    4248 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 21:32:36.441991    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 21:32:36.446390    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:36.501449    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:36.501449    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:36.501449    4248 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 21:32:36.684877    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:32:36.684877    4248 machine.go:97] duration metric: took 5.0345422s to provisionDockerMachine
	I1212 21:32:36.684877    4248 start.go:293] postStartSetup for "newest-cni-449900" (driver="docker")
	I1212 21:32:36.684877    4248 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:32:36.689141    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:32:36.692539    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:36.754930    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:36.889809    4248 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:32:36.897890    4248 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:32:36.897963    4248 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:32:36.897963    4248 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1212 21:32:36.898205    4248 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1212 21:32:36.898730    4248 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> 133962.pem in /etc/ssl/certs
	I1212 21:32:36.903242    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:32:36.916977    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /etc/ssl/certs/133962.pem (1708 bytes)
	I1212 21:32:36.944800    4248 start.go:296] duration metric: took 259.9189ms for postStartSetup
	I1212 21:32:36.949417    4248 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:32:36.952948    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:37.004507    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:37.144023    4248 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:32:37.153150    4248 fix.go:56] duration metric: took 6.229706s for fixHost
	I1212 21:32:37.153150    4248 start.go:83] releasing machines lock for "newest-cni-449900", held for 6.229706s
	I1212 21:32:37.156410    4248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-449900
	I1212 21:32:37.223471    4248 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1212 21:32:37.227811    4248 ssh_runner.go:195] Run: cat /version.json
	I1212 21:32:37.227811    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:37.230777    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:37.282580    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:37.288636    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	W1212 21:32:37.396194    4248 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1212 21:32:37.419380    4248 ssh_runner.go:195] Run: systemctl --version
	I1212 21:32:37.433080    4248 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:32:37.443489    4248 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:32:37.447019    4248 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:32:37.460679    4248 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:32:37.460679    4248 start.go:496] detecting cgroup driver to use...
	I1212 21:32:37.460679    4248 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:32:37.460679    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:32:37.488659    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1212 21:32:37.507342    4248 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1212 21:32:37.507342    4248 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1212 21:32:37.507342    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 21:32:37.523982    4248 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 21:32:37.528069    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 21:32:37.548632    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:32:37.567777    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 21:32:37.588723    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:32:37.608062    4248 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:32:37.629201    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 21:32:37.649560    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 21:32:37.667527    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 21:32:37.690529    4248 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:32:37.710687    4248 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:32:37.729958    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:37.888501    4248 ssh_runner.go:195] Run: sudo systemctl restart containerd
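	[note] The run of `sed` commands logged above rewrites /etc/containerd/config.toml in place to force the cgroupfs driver before restarting containerd. The core edit (the `SystemdCgroup` rewrite from 21:32:37.528069) can be reproduced against a scratch copy; the file contents below are a minimal stand-in, not minikube's real config:

```shell
# Scratch copy of the relevant fragment of containerd's config.toml.
cat > /tmp/config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Same rewrite minikube runs: flip SystemdCgroup to false while the
# \1 backreference preserves the line's original indentation.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /tmp/config.toml

grep 'SystemdCgroup' /tmp/config.toml
```

	The same backreference pattern is used for the `sandbox_image`, `restrict_oom_score_adj`, and `conf_dir` rewrites earlier in the log.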
	I1212 21:32:38.016543    4248 start.go:496] detecting cgroup driver to use...
	I1212 21:32:38.016543    4248 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:32:38.021759    4248 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 21:32:38.045532    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:32:38.071354    4248 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:32:38.150544    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:32:38.173582    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 21:32:38.193077    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:32:38.218979    4248 ssh_runner.go:195] Run: which cri-dockerd
	I1212 21:32:38.231122    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 21:32:38.246140    4248 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1212 21:32:38.271127    4248 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 21:32:38.414548    4248 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 21:32:38.549270    4248 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 21:32:38.549270    4248 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 21:32:38.574682    4248 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1212 21:32:38.596980    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:38.746077    4248 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 21:32:39.571827    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:32:39.594550    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1212 21:32:39.618807    4248 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1212 21:32:39.645739    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:32:39.668299    4248 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 21:32:39.807124    4248 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 21:32:39.946119    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:40.086356    4248 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 21:32:40.115320    4248 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1212 21:32:40.136980    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:40.275781    4248 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1212 21:32:40.384906    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:32:40.403362    4248 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 21:32:40.407665    4248 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 21:32:40.417070    4248 start.go:564] Will wait 60s for crictl version
	I1212 21:32:40.421802    4248 ssh_runner.go:195] Run: which crictl
	I1212 21:32:40.435075    4248 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:32:40.477353    4248 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1212 21:32:40.481633    4248 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:32:40.526598    4248 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:32:40.566170    4248 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1212 21:32:40.570307    4248 cli_runner.go:164] Run: docker exec -t newest-cni-449900 dig +short host.docker.internal
	I1212 21:32:40.704237    4248 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1212 21:32:40.709308    4248 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1212 21:32:40.719244    4248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
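	[note] The /etc/hosts update just logged uses a filter-then-append idiom so it stays idempotent: any stale line for the name is dropped before the fresh mapping is appended and the result copied back. A standalone sketch against a scratch file (the path and the 192.168.65.2 "stale" address below are illustrative):

```shell
# Scratch hosts file with one stale host.minikube.internal entry.
HOSTS=/tmp/hosts.demo
printf '127.0.0.1\tlocalhost\n192.168.65.2\thost.minikube.internal\n' > "$HOSTS"

# Filter out the old mapping, append the current one, copy the result back
# (minikube uses sudo cp because /etc/hosts is root-owned).
{ grep -v $'\thost.minikube.internal$' "$HOSTS"
  printf '192.168.65.254\thost.minikube.internal\n'; } > /tmp/h.$$
cp /tmp/h.$$ "$HOSTS"

cat "$HOSTS"
```

	Running it twice leaves exactly one `host.minikube.internal` line, which is the point of filtering before appending.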
	I1212 21:32:40.739290    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:40.797774    4248 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 21:32:40.799861    4248 kubeadm.go:884] updating cluster {Name:newest-cni-449900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 21:32:40.800240    4248 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 21:32:40.804298    4248 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:32:40.837317    4248 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 21:32:40.837317    4248 docker.go:621] Images already preloaded, skipping extraction
	I1212 21:32:40.841250    4248 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:32:40.875188    4248 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 21:32:40.875188    4248 cache_images.go:86] Images are preloaded, skipping loading
	I1212 21:32:40.875188    4248 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 docker true true} ...
	I1212 21:32:40.875753    4248 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-449900 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:32:40.879753    4248 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 21:32:40.954718    4248 cni.go:84] Creating CNI manager for ""
	I1212 21:32:40.954718    4248 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:32:40.954718    4248 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1212 21:32:40.954718    4248 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-449900 NodeName:newest-cni-449900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:32:40.954718    4248 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-449900"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
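	[note] The kubeadm config printed above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`, which minikube then scp's to /var/tmp/minikube/kubeadm.yaml.new. A quick structural check over a saved copy (the path and the kind-only excerpt below are illustrative, not the full config):

```shell
# Kind-only excerpt of the stream logged above, saved to a scratch path.
cat > /tmp/kubeadm.demo.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF

grep -c '^---$' /tmp/kubeadm.demo.yaml   # 3 separators => 4 documents
grep '^kind:' /tmp/kubeadm.demo.yaml
```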
	I1212 21:32:40.959727    4248 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 21:32:40.972418    4248 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:32:40.977111    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:32:40.988704    4248 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1212 21:32:41.011653    4248 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 21:32:41.032100    4248 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1212 21:32:41.059217    4248 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1212 21:32:41.066089    4248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:32:41.085707    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:41.226868    4248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:32:41.248749    4248 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900 for IP: 192.168.85.2
	I1212 21:32:41.248749    4248 certs.go:195] generating shared ca certs ...
	I1212 21:32:41.248749    4248 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:32:41.249389    4248 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1212 21:32:41.249651    4248 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1212 21:32:41.249726    4248 certs.go:257] generating profile certs ...
	I1212 21:32:41.250263    4248 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\client.key
	I1212 21:32:41.250513    4248 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.key.67e5e88d
	I1212 21:32:41.250722    4248 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\proxy-client.key
	I1212 21:32:41.251524    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem (1338 bytes)
	W1212 21:32:41.251741    4248 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396_empty.pem, impossibly tiny 0 bytes
	I1212 21:32:41.251826    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1212 21:32:41.252063    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 21:32:41.252220    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 21:32:41.252398    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1212 21:32:41.252710    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem (1708 bytes)
	I1212 21:32:41.254134    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:32:41.288378    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 21:32:41.319186    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:32:41.346629    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:32:41.379412    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 21:32:41.407785    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 21:32:41.434595    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:32:41.465218    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:32:41.491104    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem --> /usr/share/ca-certificates/13396.pem (1338 bytes)
	I1212 21:32:41.526955    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /usr/share/ca-certificates/133962.pem (1708 bytes)
	I1212 21:32:41.554086    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:32:41.585032    4248 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:32:41.609643    4248 ssh_runner.go:195] Run: openssl version
	I1212 21:32:41.624413    4248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/133962.pem
	I1212 21:32:41.643262    4248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/133962.pem /etc/ssl/certs/133962.pem
	I1212 21:32:41.661513    4248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133962.pem
	I1212 21:32:41.670034    4248 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 21:32:41.674026    4248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133962.pem
	I1212 21:32:41.722721    4248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:32:41.739428    4248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:32:41.758028    4248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:32:41.775766    4248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:32:41.783842    4248 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:32:41.788493    4248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:32:41.834978    4248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:32:41.852716    4248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13396.pem
	I1212 21:32:41.872502    4248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13396.pem /etc/ssl/certs/13396.pem
	I1212 21:32:41.892268    4248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13396.pem
	I1212 21:32:41.902126    4248 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 21:32:41.906908    4248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13396.pem
	I1212 21:32:41.957407    4248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
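	[note] Each of the three test/ln/openssl sequences above installs a CA into OpenSSL's hash-named directory layout: the certificate is linked into /etc/ssl/certs, its subject hash is computed with `openssl x509 -hash -noout`, and a `<hash>.0` symlink (e.g. 51391683.0 above) lets OpenSSL find the CA by subject. A scratch-directory sketch with a throwaway self-signed certificate:

```shell
# Work in a temp dir instead of /etc/ssl/certs; CN is arbitrary.
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demoCA" \
  -keyout "$DIR/ca.key" -out "$DIR/ca.pem" 2>/dev/null

# Subject hash names the lookup symlink; ".0" disambiguates collisions.
HASH=$(openssl x509 -hash -noout -in "$DIR/ca.pem")
ln -fs "$DIR/ca.pem" "$DIR/$HASH.0"

test -L "$DIR/$HASH.0" && echo "installed as $HASH.0"
```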
	I1212 21:32:41.975429    4248 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:32:41.988570    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:32:42.036143    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:32:42.085875    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:32:42.134210    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:32:42.182920    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:32:42.231061    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 21:32:42.275151    4248 kubeadm.go:401] StartCluster: {Name:newest-cni-449900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:32:42.279857    4248 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 21:32:42.316959    4248 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:32:42.330116    4248 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 21:32:42.330159    4248 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 21:32:42.334136    4248 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:32:42.349113    4248 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:32:42.353059    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.410308    4248 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-449900" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:32:42.411003    4248 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-449900" cluster setting kubeconfig missing "newest-cni-449900" context setting]
	I1212 21:32:42.411047    4248 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:32:42.433468    4248 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:32:42.450763    4248 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1212 21:32:42.450851    4248 kubeadm.go:602] duration metric: took 120.6907ms to restartPrimaryControlPlane
	I1212 21:32:42.450851    4248 kubeadm.go:403] duration metric: took 175.6977ms to StartCluster
	I1212 21:32:42.450887    4248 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:32:42.451069    4248 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:32:42.452318    4248 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:32:42.452708    4248 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 21:32:42.452708    4248 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 21:32:42.453236    4248 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-449900"
	I1212 21:32:42.453345    4248 addons.go:70] Setting dashboard=true in profile "newest-cni-449900"
	I1212 21:32:42.453345    4248 addons.go:239] Setting addon dashboard=true in "newest-cni-449900"
	W1212 21:32:42.453442    4248 addons.go:248] addon dashboard should already be in state true
	I1212 21:32:42.453345    4248 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-449900"
	I1212 21:32:42.453442    4248 config.go:182] Loaded profile config "newest-cni-449900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:32:42.453345    4248 addons.go:70] Setting default-storageclass=true in profile "newest-cni-449900"
	I1212 21:32:42.453442    4248 host.go:66] Checking if "newest-cni-449900" exists ...
	I1212 21:32:42.453548    4248 host.go:66] Checking if "newest-cni-449900" exists ...
	I1212 21:32:42.453548    4248 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-449900"
	I1212 21:32:42.456924    4248 out.go:179] * Verifying Kubernetes components...
	I1212 21:32:42.462734    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:42.462734    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:42.462734    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:42.464017    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:42.521801    4248 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1212 21:32:42.522505    4248 addons.go:239] Setting addon default-storageclass=true in "newest-cni-449900"
	I1212 21:32:42.522505    4248 host.go:66] Checking if "newest-cni-449900" exists ...
	I1212 21:32:42.524049    4248 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:32:42.525755    4248 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1212 21:32:42.527435    4248 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:42.527435    4248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:32:42.529307    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 21:32:42.529307    4248 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 21:32:42.532224    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.534064    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.535884    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:42.589450    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:42.589450    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:42.590457    4248 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:32:42.590457    4248 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:32:42.593457    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.639453    4248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:32:42.655360    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:42.731687    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 21:32:42.731721    4248 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 21:32:42.735286    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:42.750859    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 21:32:42.750859    4248 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 21:32:42.774055    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.777032    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 21:32:42.777032    4248 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 21:32:42.833790    4248 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:32:42.834403    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 21:32:42.834403    4248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 21:32:42.837834    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:32:42.838786    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:42.862404    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 21:32:42.862439    4248 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1212 21:32:42.938528    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:42.938593    4248 retry.go:31] will retry after 285.852869ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:42.946574    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 21:32:42.946574    4248 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 21:32:42.970519    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 21:32:42.970570    4248 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 21:32:43.044060    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 21:32:43.044113    4248 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W1212 21:32:43.058105    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.058105    4248 retry.go:31] will retry after 367.133117ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.065874    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 21:32:43.065934    4248 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 21:32:43.090839    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:43.170845    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.170845    4248 retry.go:31] will retry after 360.542613ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.229247    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:43.312297    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.312297    4248 retry.go:31] will retry after 217.305042ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.338331    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:43.430298    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:43.514114    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.514114    4248 retry.go:31] will retry after 194.385848ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.535363    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:43.536351    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:43.629787    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.629787    4248 retry.go:31] will retry after 687.212662ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:32:43.630799    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.630799    4248 retry.go:31] will retry after 521.23237ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.714316    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:43.819832    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.819832    4248 retry.go:31] will retry after 810.162007ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.838681    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:44.158533    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:44.239462    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.239462    4248 retry.go:31] will retry after 722.851273ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.322823    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:44.338752    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:32:44.412482    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.412540    4248 retry.go:31] will retry after 687.458608ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.636450    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:44.715327    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.715327    4248 retry.go:31] will retry after 881.564419ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.839869    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:44.968341    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:45.056052    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.056052    4248 retry.go:31] will retry after 1.083270835s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.107611    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:45.652119   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:32:45.188500    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.188500    4248 retry.go:31] will retry after 1.051455266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.338492    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:45.601770    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:45.692383    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.692901    4248 retry.go:31] will retry after 847.636525ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.840006    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:46.145354    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:46.237762    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.237762    4248 retry.go:31] will retry after 1.841338358s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.245135    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:46.317745    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.317805    4248 retry.go:31] will retry after 1.659422291s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.338989    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:46.545951    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:46.622899    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.622899    4248 retry.go:31] will retry after 2.146117093s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.840053    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:47.339185    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:47.839795    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:47.982731    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:48.067392    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.067392    4248 retry.go:31] will retry after 2.380093596s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.084148    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:48.170522    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.171061    4248 retry.go:31] will retry after 1.169420442s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.339141    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:48.775985    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:32:48.839214    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:32:48.853292    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.853292    4248 retry.go:31] will retry after 1.773821104s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:49.339743    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:49.345080    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:49.426473    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:49.426473    4248 retry.go:31] will retry after 2.584062662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:49.838663    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:50.339208    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:50.452188    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:50.553300    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:50.553376    4248 retry.go:31] will retry after 3.748834475s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:50.633099    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:50.715582    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:50.715582    4248 retry.go:31] will retry after 2.533349108s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:50.839720    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:51.339390    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:51.839102    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:52.016916    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:52.095547    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:52.095725    4248 retry.go:31] will retry after 3.902877695s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:52.339080    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:52.839789    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:53.254021    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:53.339386    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:53.339438    4248 retry.go:31] will retry after 8.052230376s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:53.341078    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:53.838612    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:54.306967    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:54.338622    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:32:54.396253    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:54.396253    4248 retry.go:31] will retry after 4.728755572s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:54.840384    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:32:55.691200   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:32:55.339060    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:55.838985    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:56.003958    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:56.086306    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:56.086306    4248 retry.go:31] will retry after 4.27813674s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:56.339722    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:56.838152    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:57.339158    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:57.840925    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:58.339529    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:58.839108    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:59.131643    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:59.224557    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:59.224611    4248 retry.go:31] will retry after 6.97751667s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:59.340004    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:59.839304    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:00.339076    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:00.368989    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:33:00.453078    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:00.453078    4248 retry.go:31] will retry after 11.55737722s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:00.839680    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:01.337369    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:01.396666    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:33:01.481506    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:01.481506    4248 retry.go:31] will retry after 9.469717232s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:01.840205    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:02.340807    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:02.839775    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:03.338747    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:03.839967    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:04.338805    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:04.839654    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:05.730439   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:05.340177    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:05.839010    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:06.207461    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:33:06.310159    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:06.310159    4248 retry.go:31] will retry after 14.485985358s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:06.338617    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:06.839760    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:07.339574    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:07.839761    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:08.339168    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:08.839559    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:09.340617    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:09.841017    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:10.339655    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:10.840253    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:10.956764    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:33:11.041153    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:11.041153    4248 retry.go:31] will retry after 7.720343102s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:11.339467    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:11.838030    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:12.015476    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:33:12.097506    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:12.098032    4248 retry.go:31] will retry after 13.738254929s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:12.340239    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:12.840739    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:13.338865    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:13.841423    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:14.338850    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:14.839426    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:15.769327   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:15.340112    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:15.839885    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:16.340084    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:16.839133    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:17.340118    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:17.838995    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:18.340239    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:18.768160    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:33:18.839718    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:18.858996    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:18.859079    4248 retry.go:31] will retry after 29.319727103s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:19.339526    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:19.839033    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:20.342579    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:20.799893    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:33:20.838333    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:20.898204    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:20.898204    4248 retry.go:31] will retry after 24.787260988s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:21.340136    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:21.839432    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:22.339934    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:22.838948    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:23.339680    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:23.839713    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:24.339736    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:24.841279    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:25.803331   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:25.340290    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:25.840116    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:25.841834    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:33:25.931872    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:25.931872    4248 retry.go:31] will retry after 19.805631473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:26.340454    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:26.840685    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:27.340143    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:27.839689    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:28.341476    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:28.840343    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:29.340212    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:29.839319    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:30.338669    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:30.838810    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:31.338837    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:31.840375    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:32.339813    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:32.839324    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:33.339926    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:33.839761    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:34.341257    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:34.840212    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:35.842883   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:35.339489    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:35.839856    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:36.337998    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:36.840979    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:37.339343    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:37.839384    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:38.340134    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:38.840040    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:39.339613    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:39.841297    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:40.339971    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:40.839521    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:41.340107    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:41.841992    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:42.341019    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:42.840372    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:42.879262    4248 logs.go:282] 0 containers: []
	W1212 21:33:42.879262    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:42.883398    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:42.914084    4248 logs.go:282] 0 containers: []
	W1212 21:33:42.914084    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:42.917664    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:42.949277    4248 logs.go:282] 0 containers: []
	W1212 21:33:42.949344    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:42.952925    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:42.982130    4248 logs.go:282] 0 containers: []
	W1212 21:33:42.982130    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:42.985453    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:43.018015    4248 logs.go:282] 0 containers: []
	W1212 21:33:43.018015    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:43.021515    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:43.051693    4248 logs.go:282] 0 containers: []
	W1212 21:33:43.051693    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:43.055531    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:43.086156    4248 logs.go:282] 0 containers: []
	W1212 21:33:43.086156    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:43.089864    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:43.124751    4248 logs.go:282] 0 containers: []
	W1212 21:33:43.124784    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:43.124813    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:43.124844    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:43.199819    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:43.199905    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:43.266773    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:43.266773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:43.306805    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:43.306805    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:43.398810    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:43.387138    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.388868    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.391036    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.392819    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.394391    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:43.387138    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.388868    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.391036    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.392819    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.394391    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:43.398906    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:43.398934    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1212 21:33:45.876156   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:45.691215    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:33:45.743028    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:33:45.776796    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:45.776826    4248 retry.go:31] will retry after 41.02577954s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:33:45.824108    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:45.824108    4248 retry.go:31] will retry after 22.15645807s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:45.933456    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:45.956650    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:45.987658    4248 logs.go:282] 0 containers: []
	W1212 21:33:45.987702    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:45.991775    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:46.021440    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.021486    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:46.025175    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:46.052749    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.052749    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:46.057405    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:46.089535    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.089535    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:46.093406    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:46.133904    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.133904    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:46.137545    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:46.167364    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.167364    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:46.172118    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:46.199303    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.199335    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:46.203217    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:46.233482    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.233482    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:46.233482    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:46.233482    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:46.309757    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:46.300789    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.302135    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.303841    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.305512    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.306441    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:46.300789    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.302135    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.303841    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.305512    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.306441    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:46.309757    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:46.309757    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:46.342247    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:46.342247    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:46.392802    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:46.392802    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:46.456332    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:46.456332    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:48.184906    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:33:48.272776    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:48.272776    4248 retry.go:31] will retry after 31.924309129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:49.001621    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:49.031284    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:49.062413    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.062413    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:49.067359    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:49.099157    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.099157    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:49.103086    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:49.145777    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.145841    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:49.148934    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:49.177432    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.177432    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:49.180892    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:49.208677    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.208753    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:49.213124    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:49.242190    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.242273    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:49.246212    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:49.271793    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.271793    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:49.275860    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:49.301204    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.301304    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:49.301304    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:49.301304    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:49.387773    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:49.378267    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.379554    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.380906    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.382182    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.383498    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:49.378267    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.379554    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.380906    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.382182    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.383498    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:49.387773    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:49.387773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:49.417403    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:49.417403    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:49.468610    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:49.468669    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:49.530801    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:49.530801    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:52.075422    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:52.099836    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:52.132317    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.132317    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:52.136505    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:52.168253    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.168329    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:52.172204    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:52.201698    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.201728    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:52.204999    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:52.232400    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.232400    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:52.235747    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:52.264373    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.264373    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:52.268631    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:52.296360    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.296360    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:52.301499    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:52.328506    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.328506    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:52.332618    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:52.364046    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.364046    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:52.364046    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:52.364046    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:52.429893    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:52.429893    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:52.469809    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:52.469809    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:52.561531    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:52.552048    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.553328    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.554454    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.555743    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.556766    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:52.552048    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.553328    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.554454    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.555743    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.556766    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:52.561531    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:52.561531    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:52.589557    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:52.589610    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:33:55.915985   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:55.143558    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:55.169650    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:55.199312    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.199312    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:55.203019    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:55.231107    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.231107    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:55.234903    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:55.262662    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.262662    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:55.267031    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:55.298297    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.298297    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:55.302781    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:55.329536    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.329536    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:55.333805    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:55.361920    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.361920    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:55.366064    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:55.390952    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.390952    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:55.395295    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:55.420565    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.420565    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:55.420565    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:55.420565    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:55.485763    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:55.485763    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:55.523584    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:55.524585    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:55.609239    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:55.599132    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.600050    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.601408    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.603109    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.604199    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:55.599132    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.600050    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.601408    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.603109    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.604199    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:55.609239    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:55.609239    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:55.635439    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:55.635439    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:58.192127    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:58.215045    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:58.245598    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.245598    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:58.249366    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:58.277778    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.277778    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:58.282003    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:58.308406    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.308406    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:58.312530    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:58.343615    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.343615    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:58.347469    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:58.374271    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.374323    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:58.378940    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:58.408013    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.408013    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:58.412815    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:58.442217    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.442217    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:58.446143    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:58.473696    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.473696    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:58.473696    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:58.473696    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:58.512536    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:58.512536    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:58.598847    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:58.588486    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.590430    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.592244    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.593816    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.594663    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:58.588486    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.590430    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.592244    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.593816    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.594663    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:58.598847    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:58.598847    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:58.631972    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:58.631972    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:58.686549    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:58.686549    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:01.254401    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:01.280957    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:01.311649    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.311649    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:01.317426    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:01.347111    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.347111    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:01.351587    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:01.384140    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.384140    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:01.388189    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:01.414950    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.415035    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:01.419033    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:01.451573    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.451573    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:01.458437    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:01.486676    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.486676    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:01.490132    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:01.519922    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.519945    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:01.523525    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:01.551133    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.551133    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:01.551133    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:01.551226    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:01.643156    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:01.629552    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.630752    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.631976    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.634745    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.636369    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:01.629552    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.630752    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.631976    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.634745    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.636369    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:01.643201    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:01.643201    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:01.670680    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:01.670680    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:01.719121    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:01.719121    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:01.778945    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:01.778945    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:04.321335    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:04.346182    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:04.379694    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.379694    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:04.383453    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:04.413770    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.413770    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:04.417438    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:04.449689    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.449689    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:04.453945    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:04.484307    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.484336    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:04.488052    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:04.520437    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.520529    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:04.523808    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:04.554411    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.554486    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:04.558132    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:04.590943    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.590991    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:04.596733    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:04.626155    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.626155    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:04.626155    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:04.626155    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:04.690704    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:04.690704    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:04.737151    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:04.737151    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:04.836298    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:04.823870    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.824851    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.826975    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.828203    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.829992    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:04.823870    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.824851    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.826975    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.828203    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.829992    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:04.836298    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:04.836298    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:04.866456    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:04.866456    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:34:05.955333   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:07.431984    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:07.455983    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:07.487054    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.487054    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:07.491008    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:07.518559    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.518559    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:07.523241    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:07.555141    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.555206    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:07.559053    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:07.586080    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.586129    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:07.590415    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:07.617729    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.617802    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:07.621528    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:07.648847    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.648847    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:07.653016    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:07.679589    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.679589    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:07.682589    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:07.710509    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.710549    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:07.710584    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:07.710584    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:07.740602    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:07.740602    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:07.795596    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:07.795596    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:07.856481    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:07.856481    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:07.896541    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:07.896541    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:34:07.984068    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:34:07.988076    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:07.977659    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.978729    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.979455    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.981723    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.982573    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:07.977659    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.978729    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.979455    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.981723    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.982573    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1212 21:34:08.063575    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:34:08.063575    4248 retry.go:31] will retry after 24.947157304s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:34:10.493129    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:10.518032    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:10.551592    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.551592    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:10.555349    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:10.583478    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.583478    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:10.587337    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:10.614968    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.614968    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:10.618410    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:10.649067    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.649067    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:10.651995    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:10.683408    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.683408    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:10.687055    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:10.719380    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.719380    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:10.722379    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:10.750539    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.750539    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:10.753888    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:10.783559    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.783559    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:10.783559    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:10.783559    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:10.847683    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:10.847683    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:10.886290    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:10.886290    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:10.977825    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:10.965780    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.967047    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.970464    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.971759    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.973064    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:10.965780    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.967047    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.970464    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.971759    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.973064    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:10.977825    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:10.977825    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:11.007383    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:11.007383    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:13.563252    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:13.588423    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:13.621565    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.621565    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:13.625750    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:13.654777    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.654777    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:13.658374    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:13.688618    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.688672    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:13.692472    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:13.719610    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.719610    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:13.723237    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:13.752648    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.752648    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:13.756634    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:13.783829    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.783897    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:13.788161    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:13.819558    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.819558    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:13.823971    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:13.852579    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.852579    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:13.852650    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:13.852650    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:13.917806    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:13.917806    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:13.957291    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:13.957291    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:14.045151    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:14.033435    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.035385    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.037555    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.038496    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.039928    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:14.033435    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.035385    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.037555    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.038496    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.039928    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:14.045151    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:14.045151    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:14.072113    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:14.072113    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:34:15.995826   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:16.634542    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:16.664271    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:16.693836    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.693836    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:16.697626    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:16.724961    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.724961    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:16.729680    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:16.759174    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.759174    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:16.764416    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:16.793992    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.793992    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:16.801287    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:16.833032    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.833032    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:16.837930    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:16.867028    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.867109    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:16.871232    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:16.901023    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.901023    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:16.904729    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:16.932215    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.932215    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:16.932215    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:16.932215    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:17.000205    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:17.000205    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:17.040242    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:17.040242    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:17.128411    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:17.118697    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.119770    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.121001    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.121935    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.122994    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:17.118697    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.119770    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.121001    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.121935    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.122994    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:17.128411    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:17.128411    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:17.154693    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:17.154693    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:19.712060    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:19.740196    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:19.769381    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.769381    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:19.774003    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:19.805171    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.805171    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:19.809714    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:19.839073    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.839073    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:19.846215    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:19.874403    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.874403    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:19.878637    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:19.912913    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.912913    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:19.916626    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:19.944658    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.944710    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:19.948241    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:19.979179    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.979241    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:19.982846    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:20.011754    4248 logs.go:282] 0 containers: []
	W1212 21:34:20.011812    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:20.011864    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:20.011913    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:20.052176    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:20.052176    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:20.143325    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:20.132505    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.133469    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.134667    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.135927    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.139001    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:20.132505    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.133469    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.134667    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.135927    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.139001    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:20.143363    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:20.143410    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:20.172292    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:20.172292    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:20.202316    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:34:20.222997    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:20.222997    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1212 21:34:20.293327    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:34:20.293327    4248 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 21:34:22.836246    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:22.860619    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:22.894925    4248 logs.go:282] 0 containers: []
	W1212 21:34:22.894925    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:22.898706    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:22.928391    4248 logs.go:282] 0 containers: []
	W1212 21:34:22.928391    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:22.931509    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:22.962526    4248 logs.go:282] 0 containers: []
	W1212 21:34:22.962526    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:22.966341    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:22.998739    4248 logs.go:282] 0 containers: []
	W1212 21:34:22.998810    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:23.002260    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:23.038485    4248 logs.go:282] 0 containers: []
	W1212 21:34:23.038485    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:23.042357    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:23.069862    4248 logs.go:282] 0 containers: []
	W1212 21:34:23.069862    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:23.073843    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:23.102066    4248 logs.go:282] 0 containers: []
	W1212 21:34:23.102066    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:23.105940    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:23.134159    4248 logs.go:282] 0 containers: []
	W1212 21:34:23.134159    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:23.134159    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:23.134159    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:23.200245    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:23.200245    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:23.245899    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:23.245899    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:23.334171    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:23.323723    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.324634    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.327041    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.328182    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.329038    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:23.323723    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.324634    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.327041    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.328182    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.329038    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:23.334171    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:23.334171    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:23.363271    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:23.363890    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:34:26.035692   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:25.919925    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:25.943736    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:25.976745    4248 logs.go:282] 0 containers: []
	W1212 21:34:25.976745    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:25.982399    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:26.011034    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.011111    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:26.015074    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:26.041930    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.041960    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:26.046384    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:26.079224    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.079224    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:26.083294    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:26.114533    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.114622    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:26.118567    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:26.144766    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.144766    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:26.148635    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:26.178718    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.178773    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:26.182397    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:26.209360    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.209428    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:26.209458    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:26.209458    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:26.246526    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:26.246526    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:26.333000    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:26.322480    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.324197    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.326202    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.327840    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.328729    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:26.322480    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.324197    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.326202    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.327840    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.328729    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:26.333074    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:26.333074    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:26.364508    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:26.364508    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:26.415398    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:26.415922    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:26.808632    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:34:26.887159    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:34:26.887159    4248 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 21:34:28.982561    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:29.008658    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:29.038321    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.038321    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:29.043365    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:29.074505    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.074505    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:29.079133    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:29.107625    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.107625    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:29.111459    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:29.141746    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.141771    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:29.145351    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:29.173757    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.173757    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:29.177451    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:29.207313    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.207375    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:29.211355    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:29.238896    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.238896    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:29.242592    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:29.271887    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.271955    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:29.271992    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:29.271992    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:29.302135    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:29.302135    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:29.354758    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:29.354787    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:29.415567    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:29.416566    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:29.460567    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:29.460567    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:29.568507    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:29.558828    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.559760    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.561215    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.564376    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.566192    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:29.558828    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.559760    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.561215    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.564376    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.566192    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:32.072619    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:32.098196    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:32.129542    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.129605    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:32.133207    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:32.165871    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.165871    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:32.169728    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:32.197622    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.197622    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:32.201406    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:32.229774    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.229866    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:32.233559    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:32.261871    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.261922    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:32.265550    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:32.292765    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.292838    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:32.297859    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:32.324309    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.324309    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:32.330216    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:32.361218    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.361300    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:32.361300    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:32.361300    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:32.397467    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:32.397467    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:32.486261    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:32.475376    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.476624    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.477661    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.478789    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.479928    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:32.475376    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.476624    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.477661    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.478789    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.479928    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:32.486261    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:32.486261    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:32.514032    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:32.514583    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:32.560591    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:32.560639    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:33.016195    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:34:33.096004    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:34:33.096004    4248 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 21:34:33.099548    4248 out.go:179] * Enabled addons: 
	I1212 21:34:33.102664    4248 addons.go:530] duration metric: took 1m50.6481558s for enable addons: enabled=[]
	W1212 21:34:36.071150   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:35.127374    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:35.151249    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:35.181629    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.181629    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:35.185603    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:35.214096    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.214151    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:35.218239    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:35.247180    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.247201    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:35.253408    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:35.284673    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.284673    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:35.288386    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:35.317675    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.317675    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:35.320949    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:35.350108    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.350178    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:35.353675    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:35.385201    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.385201    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:35.388633    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:35.416281    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.416281    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:35.416281    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:35.416281    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:35.444773    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:35.444773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:35.494185    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:35.494185    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:35.554851    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:35.554851    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:35.593466    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:35.593466    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:35.675497    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:35.665255    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.666291    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.667389    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.668747    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.670048    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:35.665255    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.666291    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.667389    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.668747    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.670048    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:38.181493    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:38.207315    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:38.240970    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.240970    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:38.244930    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:38.270853    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.270853    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:38.274626    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:38.302607    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.302607    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:38.306311    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:38.332974    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.332998    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:38.336938    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:38.366523    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.366523    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:38.370763    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:38.400788    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.400855    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:38.404730    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:38.432105    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.432140    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:38.435718    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:38.468252    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.468252    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:38.468252    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:38.468252    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:38.498010    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:38.498010    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:38.549065    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:38.549065    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:38.610282    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:38.610282    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:38.649865    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:38.649865    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:38.742735    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:38.729670    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.730594    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.734869    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.735575    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.737749    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:38.729670    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.730594    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.734869    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.735575    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.737749    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:41.248859    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:41.273451    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:41.307576    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.307576    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:41.311206    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:41.341191    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.341191    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:41.344873    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:41.373089    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.373089    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:41.377064    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:41.407927    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.407927    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:41.411904    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:41.438747    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.438747    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:41.442684    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:41.471705    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.471705    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:41.475643    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:41.502964    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.503009    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:41.506219    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:41.537468    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.537468    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:41.537468    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:41.537468    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:41.601385    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:41.601385    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:41.640441    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:41.640441    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:41.726762    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:41.716055    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.717335    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.718444    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.719692    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.720641    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:41.716055    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.717335    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.718444    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.719692    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.720641    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:41.727284    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:41.727418    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:41.753195    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:41.753250    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:44.308085    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:44.334644    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:44.365798    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.365798    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:44.369363    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:44.401410    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.401463    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:44.405291    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:44.434343    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.434343    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:44.438273    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:44.468474    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.468525    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:44.474306    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:44.500642    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.500642    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:44.504057    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:44.533188    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.533188    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:44.538912    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:44.570110    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.570156    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:44.573802    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:44.628237    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.628237    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:44.628313    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:44.628313    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:44.695236    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:44.695236    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:44.756020    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:44.756020    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:44.797607    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:44.797607    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:44.885974    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:44.873979    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.875233    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.876512    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.878696    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.880447    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:44.873979    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.875233    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.876512    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.878696    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.880447    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:44.885974    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:44.885974    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1212 21:34:46.110883   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:47.418476    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:47.448163    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:47.485273    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.485367    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:47.489088    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:47.519610    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.519610    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:47.523527    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:47.556797    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.556797    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:47.561198    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:47.592455    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.592486    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:47.597545    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:47.642336    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.642336    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:47.646873    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:47.674652    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.674652    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:47.678167    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:47.711489    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.711583    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:47.715137    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:47.743744    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.743744    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:47.743744    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:47.743744    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:47.772500    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:47.772500    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:47.821703    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:47.821703    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:47.885067    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:47.886068    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:47.927691    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:47.927691    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:48.009816    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:47.997942    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:47.998780    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.001728    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.003155    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.003997    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:47.997942    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:47.998780    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.001728    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.003155    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.003997    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:50.515712    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:50.539433    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:50.569952    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.570011    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:50.573934    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:50.602042    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.602042    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:50.606815    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:50.637314    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.637314    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:50.641189    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:50.671374    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.671448    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:50.675158    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:50.705121    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.705121    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:50.708839    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:50.736349    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.736349    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:50.740642    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:50.766780    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.766780    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:50.771844    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:50.799830    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.799830    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:50.799830    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:50.799935    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:50.864221    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:50.864221    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:50.902900    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:50.902900    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:50.987201    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:50.977230    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.978460    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.979921    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.981707    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.982796    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:50.977230    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.978460    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.979921    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.981707    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.982796    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:50.987245    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:50.987307    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:51.013974    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:51.013974    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:53.565470    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:53.591015    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:53.622676    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.622700    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:53.626721    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:53.656673    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.656708    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:53.660173    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:53.690897    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.690897    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:53.695672    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:53.724746    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.724746    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:53.729650    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:53.755860    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.755860    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:53.761786    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:53.788721    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.788721    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:53.792819    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:53.824882    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.824923    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:53.827878    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:53.861835    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.861835    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:53.861835    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:53.861922    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:53.923537    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:53.923537    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:53.963431    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:53.963431    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:54.047319    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:54.037350    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.038501    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.039589    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.040496    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.043315    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:54.037350    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.038501    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.039589    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.040496    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.043315    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:54.047386    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:54.047386    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:54.072633    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:54.072633    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:34:56.149055   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:56.625180    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:56.650655    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:56.685280    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.685318    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:56.689156    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:56.714493    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.714493    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:56.718695    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:56.746923    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.746990    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:56.750886    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:56.778419    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.778419    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:56.783916    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:56.811946    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.811946    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:56.815910    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:56.846245    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.846245    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:56.849750    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:56.881552    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.881612    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:56.886159    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:56.914494    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.914494    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:56.914494    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:56.914494    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:56.978107    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:56.978107    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:57.017141    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:57.017141    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:57.105278    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:57.095758    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.096802    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.099867    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.100954    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.102366    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:57.095758    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.096802    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.099867    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.100954    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.102366    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:57.105278    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:57.105278    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:57.136106    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:57.136106    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:59.698008    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:59.721859    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:59.752926    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.752926    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:59.758293    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:59.787817    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.787817    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:59.792012    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:59.820724    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.820724    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:59.824383    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:59.853943    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.853943    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:59.856939    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:59.884234    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.884234    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:59.887359    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:59.917769    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.917769    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:59.920766    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:59.947735    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.947735    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:59.950845    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:59.982686    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.982686    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:59.982686    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:59.982686    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:00.047428    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:00.047428    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:00.087722    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:00.087722    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:00.173037    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:00.162970    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.163861    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.166472    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.167086    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.169872    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:00.162970    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.163861    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.166472    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.167086    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.169872    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:00.173037    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:00.173124    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:00.200722    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:00.200722    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:02.758771    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:02.785740    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:02.818761    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.818761    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:02.822630    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:02.856985    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.857041    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:02.860042    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:02.891635    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.891635    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:02.896827    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:02.927006    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.927006    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:02.930675    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:02.961911    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.961911    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:02.966203    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:02.995618    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.995618    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:03.000489    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:03.029638    4248 logs.go:282] 0 containers: []
	W1212 21:35:03.029720    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:03.033579    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:03.062226    4248 logs.go:282] 0 containers: []
	W1212 21:35:03.062226    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:03.062226    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:03.062226    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:03.123924    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:03.123924    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:03.164267    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:03.164267    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:03.278702    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:03.266561    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.267580    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.268479    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.270653    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.271579    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:03.266561    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.267580    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.268479    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.270653    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.271579    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:03.278702    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:03.278702    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:03.310678    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:03.310678    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:35:06.187241   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:35:05.870522    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:05.895548    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:05.931745    4248 logs.go:282] 0 containers: []
	W1212 21:35:05.931745    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:05.935905    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:05.967105    4248 logs.go:282] 0 containers: []
	W1212 21:35:05.967167    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:05.970526    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:05.999378    4248 logs.go:282] 0 containers: []
	W1212 21:35:05.999501    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:06.005272    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:06.033559    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.033559    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:06.037432    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:06.067367    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.067423    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:06.071216    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:06.098778    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.098778    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:06.102725    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:06.129330    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.129373    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:06.133426    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:06.161909    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.161982    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:06.161982    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:06.161982    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:06.227303    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:06.227303    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:06.268038    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:06.268038    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:06.361371    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:06.349507    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.350379    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.353273    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.354715    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.355937    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:06.349507    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.350379    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.353273    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.354715    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.355937    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:06.361371    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:06.361371    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:06.387773    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:06.387773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:08.952500    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:08.976516    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:09.008567    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.008567    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:09.012505    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:09.041661    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.041661    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:09.045897    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:09.072715    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.072715    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:09.076471    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:09.104975    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.104975    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:09.110239    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:09.137529    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.137529    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:09.142874    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:09.172751    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.172856    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:09.176271    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:09.208124    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.208124    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:09.211966    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:09.240860    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.240922    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:09.240922    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:09.240922    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:09.277501    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:09.277501    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:09.365788    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:09.355192    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.356293    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.357382    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.358419    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.359695    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:09.355192    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.356293    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.357382    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.358419    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.359695    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:09.365788    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:09.365788    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:09.393564    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:09.393564    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:09.446625    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:09.446625    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:12.014046    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:12.039061    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:12.073012    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.073012    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:12.076517    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:12.106078    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.106078    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:12.110106    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:12.148792    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.148792    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:12.153359    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:12.180485    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.180485    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:12.184489    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:12.216517    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.216517    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:12.219938    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:12.248201    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.248280    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:12.251955    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:12.279303    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.279303    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:12.283688    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:12.311155    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.311155    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:12.311240    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:12.311240    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:12.363330    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:12.363330    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:12.424272    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:12.424272    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:12.465092    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:12.465092    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:12.553464    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:12.543149    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.544047    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.546957    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.548802    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.550266    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:12.543149    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.544047    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.546957    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.548802    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.550266    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:12.553464    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:12.553464    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:15.089907    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:15.115463    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	W1212 21:35:16.227474   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:35:15.148783    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.148874    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:15.153957    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:15.184011    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.184098    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:15.189531    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:15.217178    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.217178    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:15.220770    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:15.250751    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.250751    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:15.254712    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:15.285217    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.285217    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:15.289685    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:15.318549    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.318549    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:15.322745    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:15.350449    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.350449    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:15.354843    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:15.387373    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.387373    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:15.387373    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:15.387448    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:15.447923    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:15.447923    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:15.486013    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:15.486013    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:15.578245    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:15.568067    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.569055    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.570286    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.571140    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.573311    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:15.568067    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.569055    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.570286    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.571140    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.573311    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:15.578325    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:15.578352    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:15.605626    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:15.605626    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:18.166679    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:18.192323    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:18.225358    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.225358    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:18.228980    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:18.257552    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.257552    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:18.261101    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:18.289460    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.289460    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:18.294831    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:18.323718    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.323799    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:18.327263    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:18.355946    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.356051    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:18.360915    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:18.389130    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.389212    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:18.392968    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:18.421891    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.421979    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:18.425694    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:18.454158    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.454158    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:18.454158    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:18.454158    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:18.537920    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:18.527608    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.528385    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.531174    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.532949    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.533980    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:18.527608    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.528385    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.531174    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.532949    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.533980    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:18.537920    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:18.537920    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:18.567575    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:18.567575    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:18.620910    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:18.620945    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:18.683030    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:18.683030    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:21.227570    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:21.256918    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:21.288783    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.288783    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:21.292853    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:21.323426    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.323426    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:21.326841    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:21.358519    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.358519    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:21.364432    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:21.396744    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.396829    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:21.400322    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:21.428939    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.428939    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:21.432771    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:21.462453    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.462453    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:21.466020    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:21.494057    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.494106    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:21.497434    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:21.528610    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.528649    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:21.528649    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:21.528649    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:21.565220    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:21.565220    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:21.656317    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:21.646689    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.647621    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.649439    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.651349    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.653188    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:21.646689    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.647621    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.649439    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.651349    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.653188    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:21.656317    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:21.656317    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:21.685835    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:21.685835    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:21.735864    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:21.735864    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:24.301555    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:24.325946    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:24.356277    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.356277    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:24.360286    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:24.388916    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.388916    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:24.392624    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:24.420665    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.420696    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:24.424318    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:24.453296    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.453296    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:24.456761    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:24.485923    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.485923    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:24.489706    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:24.521430    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.521430    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:24.525001    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:24.553182    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.553182    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:24.557651    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:24.585999    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.585999    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:24.585999    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:24.585999    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:24.649025    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:24.649025    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:24.687813    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:24.687813    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:24.771442    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:24.762805    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.764135    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.765433    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.766914    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.768359    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:24.762805    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.764135    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.765433    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.766914    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.768359    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:24.771442    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:24.771442    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:24.798209    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:24.798236    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:35:26.266522   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:35:27.358394    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:27.382187    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:27.414963    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.414963    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:27.418575    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:27.446874    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.446874    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:27.450946    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:27.478168    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.478206    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:27.481426    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:27.510494    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.510494    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:27.514962    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:27.544571    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.544571    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:27.548425    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:27.577760    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.577760    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:27.583647    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:27.611248    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.611321    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:27.614827    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:27.643657    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.643657    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:27.643657    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:27.643657    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:27.727590    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:27.720083    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.721080    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.722381    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.723519    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.724748    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:27.720083    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.721080    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.722381    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.723519    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.724748    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:27.727590    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:27.727590    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:27.758480    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:27.758480    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:27.807919    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:27.807919    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:27.868159    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:27.868159    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:30.414374    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:30.439656    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:30.469888    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.469958    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:30.473898    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:30.506297    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.506297    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:30.510214    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:30.545930    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.545982    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:30.549910    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:30.576713    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.576713    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:30.581000    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:30.611561    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.611561    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:30.615085    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:30.643517    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.643600    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:30.647297    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:30.677589    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.677589    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:30.681477    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:30.712989    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.713050    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:30.713050    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:30.713050    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:30.800951    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:30.787443    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.789072    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.790773    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.791718    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.795269    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:30.787443    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.789072    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.790773    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.791718    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.795269    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:30.800985    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:30.801031    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:30.827165    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:30.827165    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:30.877219    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:30.877219    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:30.939298    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:30.939298    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:33.484552    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:33.509270    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:33.543536    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.543536    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:33.547226    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:33.579403    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.579456    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:33.583656    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:33.611689    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.611715    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:33.615934    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:33.641875    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.641939    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:33.645973    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:33.678622    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.678622    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:33.682927    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:33.712281    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.712305    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:33.716240    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:33.744051    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.744127    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:33.747417    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:33.779521    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.779591    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:33.779591    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:33.779591    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:33.840187    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:33.840187    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:33.882639    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:33.882639    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:33.969148    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:33.957711    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.958835    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.961164    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.962371    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.963325    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:33.957711    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.958835    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.961164    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.962371    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.963325    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:33.969148    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:33.969148    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:33.997909    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:33.997909    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:35:36.305111   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:35:36.549852    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:36.575183    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:36.607678    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.607678    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:36.611597    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:36.639236    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.639236    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:36.642949    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:36.671624    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.671715    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:36.677217    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:36.704204    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.704254    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:36.707613    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:36.736929    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.736929    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:36.741122    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:36.769406    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.769406    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:36.772706    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:36.799543    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.799620    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:36.803815    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:36.831079    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.831159    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:36.831159    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:36.831200    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:36.896378    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:36.896378    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:36.934866    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:36.934866    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:37.023664    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:37.013641    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.015155    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.015847    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.018003    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.018979    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:37.013641    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.015155    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.015847    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.018003    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.018979    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:37.023664    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:37.023664    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:37.053528    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:37.053528    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:39.635097    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:39.658801    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:39.689842    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.689897    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:39.693102    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:39.720497    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.720497    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:39.724029    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:39.755115    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.755115    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:39.759351    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:39.788837    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.788837    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:39.795292    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:39.820715    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.820715    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:39.824986    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:39.851308    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.851308    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:39.855167    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:39.885106    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.885106    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:39.888522    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:39.919426    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.919426    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:39.919426    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:39.919426    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:40.002497    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:39.992667    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.993466    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.995734    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.996894    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.998122    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:39.992667    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.993466    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.995734    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.996894    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.998122    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:40.002497    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:40.002497    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:40.033288    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:40.033332    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:40.080834    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:40.080834    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:40.164330    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:40.164330    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:42.710863    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:42.734865    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:42.767778    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.767778    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:42.771433    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:42.798128    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.798128    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:42.801849    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:42.831381    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.831381    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:42.834753    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:42.862923    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.862977    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:42.866577    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:42.894694    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.894694    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:42.898324    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:42.927095    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.927169    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:42.930897    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:42.960437    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.960485    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:42.963899    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:42.992769    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.992769    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:42.992769    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:42.992769    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:43.076050    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:43.066448    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.067916    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.069057    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.070194    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.071270    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:43.066448    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.067916    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.069057    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.070194    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.071270    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:43.076050    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:43.076050    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:43.117444    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:43.117444    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:43.163609    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:43.163675    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:43.222686    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:43.222686    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1212 21:35:46.341652   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:35:45.769012    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:45.792622    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:45.828260    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.828300    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:45.832060    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:45.860265    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.860345    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:45.864126    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:45.892449    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.892449    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:45.895900    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:45.928107    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.928492    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:45.933848    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:45.963204    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.963204    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:45.967539    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:45.994068    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.994068    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:45.997960    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:46.029774    4248 logs.go:282] 0 containers: []
	W1212 21:35:46.029774    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:46.034005    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:46.064207    4248 logs.go:282] 0 containers: []
	W1212 21:35:46.064207    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:46.064275    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:46.064297    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:46.150334    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:46.151334    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:46.192422    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:46.192422    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:46.282161    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:46.268121    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.269865    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.274691    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.275494    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.278053    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:46.268121    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.269865    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.274691    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.275494    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.278053    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:46.282161    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:46.282161    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:46.308247    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:46.308247    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:48.880691    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:48.905168    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:48.936143    4248 logs.go:282] 0 containers: []
	W1212 21:35:48.936143    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:48.941055    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:48.967633    4248 logs.go:282] 0 containers: []
	W1212 21:35:48.967681    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:48.972986    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:49.001908    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.001978    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:49.005690    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:49.033288    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.033288    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:49.037158    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:49.068272    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.068272    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:49.072674    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:49.118349    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.118385    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:49.123821    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:49.152003    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.152003    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:49.155603    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:49.184782    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.184857    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:49.184857    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:49.184857    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:49.245561    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:49.245561    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:49.286211    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:49.286211    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:49.376977    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:49.367834   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.369032   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.370233   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.372131   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.374116   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:49.367834   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.369032   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.370233   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.372131   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.374116   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:49.376977    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:49.376977    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:49.403713    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:49.403713    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:51.956745    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:51.982481    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:52.016133    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.016133    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:52.021023    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:52.049536    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.049536    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:52.053672    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:52.080846    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.080846    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:52.084803    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:52.113297    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.113338    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:52.116543    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:52.147940    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.147940    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:52.151545    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:52.182320    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.182320    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:52.186389    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:52.214710    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.214710    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:52.219053    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:52.245190    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.245190    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:52.245190    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:52.245190    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:52.298311    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:52.298311    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:52.366732    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:52.366732    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:52.407792    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:52.407792    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:52.496661    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:52.484312   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.485150   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.488916   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.491754   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.493226   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:52.484312   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.485150   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.488916   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.491754   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.493226   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:52.496661    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:52.496715    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:55.027190    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:55.056715    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:55.092856    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.092927    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:55.096633    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:55.129725    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.129780    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:55.133503    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:55.161685    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.161764    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:55.165325    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:55.194081    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.194081    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:55.197364    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:55.229572    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.229572    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:55.233158    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:55.260429    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.260429    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:55.264792    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:55.292582    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.292582    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:55.296385    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:55.324732    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.324732    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:55.324732    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:55.324732    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:55.378532    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:55.378610    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:55.442350    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:55.442350    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:55.482305    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:55.482305    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:55.568490    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:55.558199   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.559481   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.560697   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.561968   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.565448   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:55.558199   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.559481   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.560697   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.561968   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.565448   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:55.568490    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:55.568490    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:58.101014    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:58.125393    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:58.155725    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.155725    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:58.159868    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:58.189119    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.189119    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:58.193157    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:58.223237    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.223237    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:58.226683    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:58.255695    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.255753    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:58.260433    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:58.288788    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.288859    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:58.292437    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:58.323598    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.323671    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:58.327478    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:58.354090    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.354178    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:58.357888    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:58.386253    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.386279    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:58.386314    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:58.386314    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:58.447141    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:58.447141    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:58.486943    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:58.486943    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:58.568464    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:58.560119   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.561312   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.562297   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.564010   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.565939   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:58.560119   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.561312   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.562297   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.564010   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.565939   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:58.568464    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:58.568464    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:58.595849    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:58.595886    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:35:56.384511   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:36:01.149596    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:01.175623    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:01.208519    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.208575    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:01.211916    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:01.245312    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.245312    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:01.249530    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:01.278392    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.278392    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:01.286444    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:01.317355    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.317406    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:01.321724    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:01.353217    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.353217    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:01.357807    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:01.391831    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.391831    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:01.395723    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:01.424095    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.424095    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:01.429268    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:01.462926    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.462926    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:01.462926    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:01.462926    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:01.551074    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:01.539269   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.540233   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.541438   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.542446   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.543830   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:01.539269   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.540233   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.541438   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.542446   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.543830   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:01.551074    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:01.551074    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:01.578369    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:01.578369    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:01.629021    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:01.629021    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:01.698809    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:01.698809    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:04.243670    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:04.267614    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:04.301625    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.301625    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:04.305862    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:04.333024    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.333024    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:04.336022    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:04.367192    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.367192    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:04.370605    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:04.399595    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.399648    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:04.403374    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:04.432778    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.432778    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:04.436573    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:04.467412    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.467412    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:04.471329    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:04.498578    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.498578    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:04.502597    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:04.531764    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.531784    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:04.531784    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:04.531843    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:04.570958    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:04.570958    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:04.661113    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:04.648460   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.649248   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.652578   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.653840   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.654704   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:04.648460   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.649248   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.652578   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.653840   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.654704   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:04.661169    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:04.661169    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:04.690481    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:04.690542    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:04.741754    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:04.741754    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:07.308278    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:07.331410    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:07.365378    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.365378    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:07.369132    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:07.399048    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.399048    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:07.403054    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:07.435986    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.435986    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:07.440195    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:07.468277    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.468277    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:07.473061    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:07.502665    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.502737    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:07.505870    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:07.535294    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.535294    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:07.539205    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:07.568443    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.568443    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:07.571680    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:07.603485    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.603485    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:07.603485    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:07.603485    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:07.654029    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:07.654069    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:07.718737    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:07.718737    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:07.758197    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:07.758197    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:07.848949    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:07.837788   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.838778   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.839873   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.841153   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.842068   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:07.837788   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.838778   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.839873   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.841153   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.842068   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:07.848949    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:07.848949    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1212 21:36:06.422793   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:36:10.383554    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:10.406923    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:10.436672    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.438058    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:10.441328    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:10.470329    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.470329    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:10.475355    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:10.503029    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.504040    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:10.508067    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:10.535078    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.535078    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:10.538911    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:10.572578    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.572578    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:10.576953    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:10.605274    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.605274    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:10.609586    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:10.636893    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.636893    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:10.640901    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:10.670634    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.670634    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:10.670634    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:10.670634    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:10.740301    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:10.740301    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:10.777927    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:10.777927    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:10.872052    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:10.861842   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.862596   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.865231   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.866657   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.867639   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:10.861842   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.862596   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.865231   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.866657   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.867639   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:10.872052    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:10.872052    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:10.903069    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:10.903069    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:13.460331    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:13.482121    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:13.513917    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.513943    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:13.517730    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:13.551122    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.551122    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:13.554989    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:13.585497    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.585531    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:13.591062    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:13.617529    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.617529    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:13.620977    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:13.649520    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.649563    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:13.653279    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:13.680320    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.680320    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:13.684170    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:13.715222    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.715222    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:13.719105    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:13.748387    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.748387    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:13.748387    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:13.748387    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:13.813468    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:13.813468    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:13.854552    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:13.854552    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:13.940965    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:13.930876   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.931992   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.933250   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.934621   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.935762   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:13.930876   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.931992   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.933250   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.934621   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.935762   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:13.940965    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:13.940965    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:13.967276    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:13.967276    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:16.517841    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:16.542582    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:16.572379    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.572379    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:16.576052    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:16.603452    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.603544    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:16.607222    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:16.634623    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.634623    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:16.638649    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:16.669129    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.669129    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:16.673369    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:16.701294    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.701294    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:16.707834    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:16.734962    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.734962    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:16.739394    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:16.768315    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.768315    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:16.772559    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:16.801591    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.801670    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:16.801687    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:16.801687    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:16.866870    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:16.866870    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:16.906766    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:16.906766    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:16.998441    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:16.985782   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.986673   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.991870   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.993024   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.993940   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:16.985782   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.986673   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.991870   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.993024   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.993940   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:16.998441    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:16.998441    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:17.026255    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:17.026255    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:19.584671    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:19.610156    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:19.644064    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.644129    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:19.648288    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:19.678479    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.678479    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:19.682139    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:19.711766    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.711766    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:19.715209    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:19.744913    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.744913    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:19.748961    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:19.780312    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.780312    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:19.784184    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:19.812347    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.812347    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:19.816306    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:19.844923    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.844923    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:19.848927    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:19.877442    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.877442    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:19.877442    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:19.877442    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:19.942152    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:19.942152    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:19.981218    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:19.981218    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:20.072288    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:20.063301   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.064454   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.065982   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.067322   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.068426   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:20.063301   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.064454   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.065982   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.067322   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.068426   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:20.072288    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:20.072288    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:20.099643    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:20.099643    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:36:16.463968   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:36:25.116556   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1212 21:36:25.116556   13804 node_ready.go:38] duration metric: took 6m0.0006991s for node "no-preload-285600" to be "Ready" ...
	I1212 21:36:22.659535    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:22.685256    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:22.716650    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.716701    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:22.720282    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:22.748382    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.748382    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:22.752865    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:22.781255    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.781255    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:22.785427    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:22.817875    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.817875    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:22.822168    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:22.850625    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.850625    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:22.854306    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:22.882603    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.882665    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:22.886850    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:22.915661    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.915661    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:22.919273    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:22.948188    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.948219    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:22.948219    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:22.948272    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:23.001854    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:23.001854    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:23.062918    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:23.062918    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:23.102898    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:23.102898    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:23.188691    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:23.179388   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.180542   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.181616   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.183060   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.184504   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:23.179388   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.180542   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.181616   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.183060   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.184504   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:23.188733    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:23.188773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:25.120615   13804 out.go:203] 
	W1212 21:36:25.123657   13804 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 21:36:25.123657   13804 out.go:285] * 
	W1212 21:36:25.125621   13804 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 21:36:25.128573   13804 out.go:203] 
	I1212 21:36:25.720984    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:25.747517    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:25.789126    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.789126    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:25.792555    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:25.825100    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.825100    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:25.829108    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:25.859944    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.859944    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:25.862936    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:25.899027    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.899027    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:25.903029    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:25.932069    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.932069    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:25.937652    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:25.970039    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.970039    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:25.974772    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:26.007166    4248 logs.go:282] 0 containers: []
	W1212 21:36:26.007166    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:26.010547    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:26.043326    4248 logs.go:282] 0 containers: []
	W1212 21:36:26.043326    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:26.043380    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:26.043380    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:26.136579    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:26.129570   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.130713   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.131776   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.133029   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.133944   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:26.129570   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.130713   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.131776   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.133029   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.133944   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:26.136579    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:26.136579    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:26.164100    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:26.164100    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:26.215761    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:26.215761    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:26.284627    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:26.284627    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:28.841950    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:28.867715    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:28.905745    4248 logs.go:282] 0 containers: []
	W1212 21:36:28.905745    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:28.908970    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:28.939518    4248 logs.go:282] 0 containers: []
	W1212 21:36:28.939518    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:28.943636    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:28.973085    4248 logs.go:282] 0 containers: []
	W1212 21:36:28.973085    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:28.977068    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:29.006533    4248 logs.go:282] 0 containers: []
	W1212 21:36:29.006533    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:29.011428    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:29.051385    4248 logs.go:282] 0 containers: []
	W1212 21:36:29.051385    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:29.055841    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:29.091342    4248 logs.go:282] 0 containers: []
	W1212 21:36:29.091342    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:29.095332    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:29.123336    4248 logs.go:282] 0 containers: []
	W1212 21:36:29.123336    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:29.126340    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:29.155367    4248 logs.go:282] 0 containers: []
	W1212 21:36:29.155367    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:29.155367    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:29.155367    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:29.207287    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:29.207287    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:29.272168    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:29.272168    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:29.312257    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:29.312257    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:29.391617    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:29.382784   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.383879   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.384928   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.386418   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.387786   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:29.382784   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.383879   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.384928   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.386418   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.387786   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:29.391617    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:29.391617    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:31.923841    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:31.950124    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:31.983967    4248 logs.go:282] 0 containers: []
	W1212 21:36:31.983967    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:31.987737    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:32.015027    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.015027    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:32.020109    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:32.055983    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.056068    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:32.059730    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:32.089140    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.089140    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:32.094462    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:32.122929    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.122929    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:32.126837    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:32.156251    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.156251    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:32.160350    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:32.191862    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.191949    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:32.195885    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:32.223866    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.223925    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:32.223925    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:32.223950    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:32.255049    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:32.255049    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:32.302818    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:32.302880    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:32.366288    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:32.366288    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:32.405752    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:32.405752    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:32.490704    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:32.476428   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.477300   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.482162   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.483107   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.484601   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:32.476428   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.477300   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.482162   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.483107   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.484601   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:34.995924    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:35.024010    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:35.056509    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.056509    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:35.060912    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:35.093115    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.093115    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:35.097758    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:35.128352    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.128352    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:35.132438    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:35.159545    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.159545    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:35.163881    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:35.193455    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.193455    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:35.197292    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:35.225826    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.225826    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:35.230118    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:35.258718    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.258718    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:35.262754    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:35.289884    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.289884    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:35.289884    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:35.289884    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:35.354177    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:35.354177    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:35.392766    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:35.393766    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:35.508577    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:35.495201   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.497790   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.499934   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.500799   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.503455   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:35.495201   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.497790   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.499934   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.500799   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.503455   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:35.508577    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:35.508577    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:35.536964    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:35.538023    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:38.113096    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:38.138012    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:38.170611    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.170611    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:38.174540    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:38.203460    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.203460    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:38.209947    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:38.239843    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.239843    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:38.243116    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:38.271611    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.271611    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:38.275487    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:38.305418    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.305450    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:38.309409    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:38.336902    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.336902    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:38.340380    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:38.367606    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.367606    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:38.373821    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:38.402583    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.402583    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:38.402583    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:38.402583    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:38.438279    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:38.438279    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:38.525316    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:38.512227   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.513179   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.516702   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.518912   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.519829   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:38.512227   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.513179   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.516702   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.518912   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.519829   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:38.525316    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:38.525316    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:38.552742    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:38.553263    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:38.623531    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:38.623531    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:41.192803    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:41.221527    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:41.253765    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.253765    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:41.258162    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:41.286154    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.286154    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:41.290125    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:41.316985    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.316985    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:41.321219    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:41.349797    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.349797    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:41.353105    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:41.383082    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.383082    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:41.386895    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:41.414456    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.414456    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:41.418483    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:41.449520    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.449577    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:41.453163    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:41.486452    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.486504    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:41.486504    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:41.486504    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:41.547617    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:41.547617    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:41.587426    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:41.587426    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:41.672162    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:41.660909   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.663125   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.664194   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.665601   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.666634   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:41.660909   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.663125   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.664194   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.665601   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.666634   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:41.672162    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:41.672162    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:41.698838    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:41.698838    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:44.254238    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:44.279639    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:44.313852    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.313852    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:44.317789    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:44.346488    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.346488    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:44.349923    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:44.379740    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.379774    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:44.383168    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:44.412140    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.412140    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:44.416191    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:44.460651    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.460681    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:44.465023    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:44.496502    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.496526    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:44.500357    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:44.532104    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.532155    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:44.536284    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:44.564677    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.564677    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:44.564677    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:44.564768    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:44.642641    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:44.642641    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:44.681185    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:44.681185    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:44.775811    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:44.763716   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.764946   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.767080   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.768963   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.770287   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:44.763716   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.764946   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.767080   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.768963   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.770287   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:44.775858    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:44.775858    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:44.802443    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:44.802443    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:47.355434    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:47.380861    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:47.416615    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.416688    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:47.422899    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:47.449927    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.449927    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:47.453937    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:47.482382    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.482382    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:47.486265    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:47.517752    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.517752    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:47.521863    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:47.553097    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.553097    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:47.557020    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:47.586229    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.586229    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:47.590605    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:47.629776    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.629776    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:47.633503    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:47.660408    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.660408    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:47.660408    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:47.660408    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:47.751292    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:47.741586   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.742768   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.744535   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.745790   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.746879   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:47.741586   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.742768   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.744535   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.745790   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.746879   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:47.751292    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:47.751292    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:47.779192    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:47.779254    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:47.837296    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:47.837296    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:47.900027    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:47.900027    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:50.444550    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:50.467997    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:50.496690    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.496690    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:50.500967    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:50.526317    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.526317    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:50.530527    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:50.561433    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.561433    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:50.566001    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:50.618519    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.618519    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:50.622092    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:50.650073    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.650073    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:50.655016    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:50.683594    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.683623    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:50.687452    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:50.718509    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.718509    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:50.724946    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:50.757545    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.757577    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:50.757618    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:50.757618    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:50.819457    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:50.819457    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:50.858548    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:50.858548    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:50.941749    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:50.931367   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.932776   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.933833   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.935487   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.938377   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:50.931367   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.932776   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.933833   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.935487   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.938377   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:50.941749    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:50.941749    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:50.969772    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:50.969772    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:53.520939    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:53.549491    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:53.583344    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.583344    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:53.588894    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:53.618751    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.618751    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:53.623090    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:53.650283    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.650283    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:53.656108    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:53.682662    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.682727    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:53.686551    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:53.713705    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.713705    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:53.717716    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:53.744792    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.744792    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:53.749211    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:53.779976    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.779976    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:53.783888    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:53.815109    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.815109    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:53.815109    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:53.815109    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:53.876921    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:53.876921    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:53.916304    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:53.916304    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:54.003977    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:53.994198   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.995584   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.997212   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.998274   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.999401   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:53.994198   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.995584   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.997212   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.998274   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.999401   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:54.004510    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:54.004510    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:54.033807    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:54.033807    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:56.586896    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:56.610373    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:56.643875    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.643875    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:56.648210    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:56.679979    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.679979    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:56.684252    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:56.712701    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.712745    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:56.716425    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:56.746231    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.746231    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:56.750051    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:56.778902    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.778902    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:56.784361    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:56.813624    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.813624    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:56.817949    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:56.846221    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.846221    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:56.849772    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:56.880299    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.880299    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:56.880299    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:56.880299    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:56.945090    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:56.946089    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:56.985505    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:56.985505    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:57.077375    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:57.068729   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.069760   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.070813   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.071834   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.072749   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:57.068729   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.069760   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.070813   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.071834   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.072749   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:57.077375    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:57.077375    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:57.103533    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:57.103533    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:59.659092    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:59.684113    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:59.716016    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.716040    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:59.719576    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:59.749209    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.749209    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:59.752876    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:59.781442    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.781442    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:59.785342    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:59.814766    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.814766    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:59.818786    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:59.846373    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.846373    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:59.849782    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:59.877994    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.877994    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:59.881893    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:59.910479    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.910479    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:59.914372    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:59.946561    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.946561    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:59.946561    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:59.946561    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:00.008124    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:00.008124    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:00.047147    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:00.047147    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:00.137432    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:00.126870   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.127736   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.130207   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.131199   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.132348   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:00.126870   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.127736   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.130207   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.131199   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.132348   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:00.137480    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:00.137480    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:00.167211    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:00.167211    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:02.725601    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:02.750880    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:02.781655    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.781720    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:02.785930    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:02.814342    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.815352    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:02.819060    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:02.848212    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.848212    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:02.852622    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:02.879034    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.879034    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:02.883002    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:02.914061    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.914061    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:02.918271    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:02.946216    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.946289    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:02.949752    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:02.979537    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.979570    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:02.983289    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:03.012201    4248 logs.go:282] 0 containers: []
	W1212 21:37:03.012201    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:03.012201    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:03.012201    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:03.098494    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:03.086265   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.087072   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.089538   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.090486   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.092996   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:37:03.098494    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:03.098494    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:03.124942    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:03.124942    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:03.172838    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:03.172838    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:03.233652    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:03.233652    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:05.778260    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:05.806049    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:05.834569    4248 logs.go:282] 0 containers: []
	W1212 21:37:05.834569    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:05.838184    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:05.871331    4248 logs.go:282] 0 containers: []
	W1212 21:37:05.871331    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:05.874924    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:05.904108    4248 logs.go:282] 0 containers: []
	W1212 21:37:05.904108    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:05.907882    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:05.941911    4248 logs.go:282] 0 containers: []
	W1212 21:37:05.941911    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:05.945711    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:05.978806    4248 logs.go:282] 0 containers: []
	W1212 21:37:05.978845    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:05.983103    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:06.010395    4248 logs.go:282] 0 containers: []
	W1212 21:37:06.010395    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:06.015899    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:06.043426    4248 logs.go:282] 0 containers: []
	W1212 21:37:06.043475    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:06.047525    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:06.075777    4248 logs.go:282] 0 containers: []
	W1212 21:37:06.075777    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:06.075777    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:06.075777    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:06.140912    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:06.140912    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:06.180839    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:06.180839    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:06.273920    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:06.262433   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.263564   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.264506   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.265910   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.267120   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:37:06.273941    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:06.273941    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:06.301408    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:06.301408    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:08.853362    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:08.880482    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:08.912285    4248 logs.go:282] 0 containers: []
	W1212 21:37:08.912285    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:08.915914    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:08.945359    4248 logs.go:282] 0 containers: []
	W1212 21:37:08.945359    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:08.951021    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:08.978398    4248 logs.go:282] 0 containers: []
	W1212 21:37:08.978398    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:08.981959    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:09.013763    4248 logs.go:282] 0 containers: []
	W1212 21:37:09.013763    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:09.017724    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:09.045423    4248 logs.go:282] 0 containers: []
	W1212 21:37:09.045423    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:09.049596    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:09.077554    4248 logs.go:282] 0 containers: []
	W1212 21:37:09.077554    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:09.081163    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:09.108945    4248 logs.go:282] 0 containers: []
	W1212 21:37:09.109001    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:09.112577    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:09.141679    4248 logs.go:282] 0 containers: []
	W1212 21:37:09.141740    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:09.141765    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:09.141765    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:09.207494    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:09.208014    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:09.275675    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:09.275675    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:09.320177    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:09.320252    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:09.418820    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:09.405124   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.406042   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.410434   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.411496   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.412429   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:37:09.418849    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:09.418849    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:11.950067    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:11.974163    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:12.007025    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.007025    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:12.010964    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:12.042863    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.042863    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:12.046143    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:12.076655    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.076726    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:12.080236    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:12.107161    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.107161    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:12.113344    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:12.142179    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.142272    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:12.146446    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:12.176797    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.176797    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:12.180681    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:12.209797    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.209797    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:12.213605    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:12.244494    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.244494    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:12.244494    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:12.244494    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:12.332970    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:12.322197   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.323340   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.325130   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.326221   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.328022   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:37:12.332970    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:12.332970    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:12.362486    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:12.363006    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:12.407548    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:12.407548    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:12.469640    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:12.469640    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:15.019141    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:15.042869    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:15.073404    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.073404    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:15.076962    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:15.105390    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.105390    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:15.109785    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:15.143740    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.143775    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:15.147734    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:15.174650    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.174711    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:15.178235    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:15.207870    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.207870    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:15.212288    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:15.248454    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.248454    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:15.253060    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:15.282067    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.282067    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:15.285778    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:15.317032    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.317032    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:15.317032    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:15.317032    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:15.350767    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:15.350767    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:15.408508    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:15.408508    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:15.471124    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:15.471124    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:15.511541    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:15.511541    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:15.597230    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:15.586821   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.588485   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.590856   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.591864   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.593117   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:37:18.103161    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:18.132020    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:18.167621    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.167621    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:18.171555    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:18.197535    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.197535    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:18.201484    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:18.231207    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.231237    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:18.234569    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:18.262608    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.262608    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:18.266310    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:18.291496    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.291496    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:18.296129    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:18.323567    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.323567    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:18.328112    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:18.363055    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.363055    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:18.368448    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:18.398543    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.398543    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:18.398543    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:18.398543    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:18.451687    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:18.451738    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:18.512324    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:18.512324    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:18.553614    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:18.553614    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:18.644707    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:18.634792   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.635628   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.638131   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.639176   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.640339   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:18.634792   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.635628   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.638131   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.639176   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.640339   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:18.644734    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:18.644779    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:21.175562    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:21.201442    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:21.233480    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.233480    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:21.237891    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:21.267032    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.267032    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:21.273539    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:21.301291    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.301291    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:21.304622    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:21.333953    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.333953    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:21.336973    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:21.366442    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.366442    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:21.370770    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:21.401250    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.401326    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:21.406507    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:21.434989    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.434989    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:21.438536    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:21.468847    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.468895    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:21.468895    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:21.468937    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:21.506543    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:21.506543    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:21.592900    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:21.582025   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.584472   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.585983   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.587799   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.588955   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:21.582025   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.584472   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.585983   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.587799   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.588955   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:21.592928    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:21.592980    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:21.624073    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:21.624114    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:21.675642    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:21.675642    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:24.243223    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:24.272878    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:24.306285    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.306285    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:24.310609    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:24.340982    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.340982    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:24.344434    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:24.371790    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.371790    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:24.376448    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:24.403045    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.403045    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:24.406643    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:24.436352    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.436352    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:24.440299    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:24.472033    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.472033    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:24.476007    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:24.508554    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.508554    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:24.512161    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:24.542727    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.542727    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:24.542727    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:24.542727    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:24.570829    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:24.570829    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:24.618660    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:24.618660    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:24.682106    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:24.682106    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:24.721952    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:24.721952    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:24.799468    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:24.791295   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.792228   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.793454   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.794593   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.795784   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:24.791295   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.792228   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.793454   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.794593   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.795784   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:27.305001    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:27.330707    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:27.365828    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.365828    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:27.370558    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:27.396820    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.396820    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:27.401269    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:27.430536    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.430536    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:27.434026    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:27.462920    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.462920    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:27.466302    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:27.494753    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.494753    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:27.498776    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:27.526827    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.526827    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:27.530938    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:27.558811    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.558811    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:27.562896    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:27.593235    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.593235    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:27.593235    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:27.593235    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:27.645061    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:27.645061    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:27.708198    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:27.708198    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:27.746161    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:27.746161    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:27.834200    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:27.822744   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.823517   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.828076   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.828851   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.830876   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:27.822744   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.823517   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.828076   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.828851   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.830876   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:27.834200    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:27.834200    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:30.365194    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:30.390907    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:30.422859    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.422859    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:30.426658    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:30.458081    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.458081    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:30.462130    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:30.492792    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.492838    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:30.496517    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:30.535575    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.535575    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:30.539664    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:30.570934    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.570934    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:30.575357    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:30.606013    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.606013    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:30.610553    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:30.637448    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.637448    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:30.640965    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:30.670791    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.670866    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:30.670866    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:30.670866    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:30.701120    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:30.701120    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:30.751223    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:30.751223    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:30.813495    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:30.813495    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:30.853428    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:30.853428    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:30.937812    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:30.926651   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.927983   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.930891   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.931995   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.933300   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:30.926651   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.927983   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.930891   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.931995   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.933300   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:33.442840    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:33.471704    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:33.504567    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.504567    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:33.508564    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:33.540112    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.540147    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:33.544036    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:33.572905    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.572905    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:33.576956    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:33.606272    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.606334    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:33.610145    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:33.637137    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.637137    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:33.641246    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:33.670136    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.670136    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:33.673715    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:33.701659    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.701659    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:33.705326    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:33.736499    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.736585    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:33.736585    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:33.736585    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:33.802820    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:33.802820    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:33.841898    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:33.841898    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:33.928502    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:33.917072   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.918297   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.919525   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.922045   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.923688   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:33.917072   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.918297   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.919525   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.922045   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.923688   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:33.928502    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:33.928502    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:33.954803    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:33.954803    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:36.508990    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:36.532529    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:36.565107    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.565107    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:36.569219    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:36.599219    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.599219    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:36.604130    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:36.641323    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.641399    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:36.644874    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:36.678077    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.678077    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:36.681676    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:36.717361    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.717361    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:36.720484    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:36.758068    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.758131    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:36.761928    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:36.788886    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.788886    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:36.792763    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:36.822518    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.822518    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:36.822518    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:36.822594    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:36.886902    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:36.886902    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:36.926353    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:36.926353    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:37.017351    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:37.005572   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.006475   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.009602   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.011562   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.013067   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:37.005572   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.006475   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.009602   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.011562   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.013067   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:37.017351    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:37.017351    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:37.043945    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:37.043945    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:39.613292    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:39.638402    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:39.668963    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.668963    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:39.674050    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:39.706941    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.706993    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:39.711641    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:39.743407    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.743407    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:39.748540    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:39.776567    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.776567    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:39.780756    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:39.809769    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.809769    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:39.814028    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:39.841619    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.841619    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:39.845432    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:39.872294    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.872294    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:39.876039    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:39.906559    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.906559    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:39.906559    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:39.906559    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:39.971123    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:39.971123    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:40.010767    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:40.010767    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:40.121979    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:40.111150   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.112155   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.113799   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.114617   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.117879   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:40.111150   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.112155   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.113799   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.114617   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.117879   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:40.121979    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:40.121979    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:40.153150    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:40.153150    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:42.714553    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:42.739259    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:42.773825    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.773825    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:42.777653    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:42.806593    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.806617    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:42.811305    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:42.839804    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.839804    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:42.843545    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:42.871645    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.871645    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:42.877455    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:42.907575    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.907674    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:42.911474    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:42.947872    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.947872    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:42.951182    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:42.981899    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.981899    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:42.985358    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:43.015278    4248 logs.go:282] 0 containers: []
	W1212 21:37:43.015278    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:43.015278    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:43.015278    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:43.083520    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:43.083520    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:43.124100    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:43.124100    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:43.208232    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:43.199122   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.200440   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.201500   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.203019   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.204672   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:43.199122   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.200440   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.201500   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.203019   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.204672   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:43.208232    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:43.208232    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:43.234266    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:43.234266    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:45.791967    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:45.818451    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:45.851045    4248 logs.go:282] 0 containers: []
	W1212 21:37:45.851045    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:45.854848    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:45.880205    4248 logs.go:282] 0 containers: []
	W1212 21:37:45.880205    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:45.883681    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:45.910629    4248 logs.go:282] 0 containers: []
	W1212 21:37:45.910629    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:45.914618    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:45.944467    4248 logs.go:282] 0 containers: []
	W1212 21:37:45.944467    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:45.948393    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:45.979772    4248 logs.go:282] 0 containers: []
	W1212 21:37:45.979772    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:45.983154    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:46.011861    4248 logs.go:282] 0 containers: []
	W1212 21:37:46.011947    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:46.016147    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:46.043151    4248 logs.go:282] 0 containers: []
	W1212 21:37:46.043151    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:46.048940    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:46.101712    4248 logs.go:282] 0 containers: []
	W1212 21:37:46.101712    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:46.101712    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:46.101712    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:46.165060    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:46.165060    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:46.204152    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:46.204152    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:46.295737    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:46.284405   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.285323   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.289255   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.290702   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.291840   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:46.284405   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.285323   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.289255   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.290702   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.291840   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:46.295737    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:46.295737    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:46.323140    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:46.323657    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:48.876615    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:48.902293    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:48.935424    4248 logs.go:282] 0 containers: []
	W1212 21:37:48.935424    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:48.939391    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:48.966927    4248 logs.go:282] 0 containers: []
	W1212 21:37:48.966927    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:48.970734    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:49.001644    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.001644    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:49.005407    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:49.035360    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.035360    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:49.042740    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:49.074356    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.074356    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:49.078793    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:49.110567    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.110625    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:49.114551    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:49.145236    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.145236    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:49.149599    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:49.177230    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.177230    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:49.177230    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:49.177230    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:49.240142    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:49.240142    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:49.278723    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:49.278723    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:49.367647    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:49.358947   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.359901   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.361621   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.363095   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.364121   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:49.358947   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.359901   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.361621   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.363095   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.364121   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:49.367647    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:49.367647    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:49.397635    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:49.397635    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:51.962408    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:51.992442    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:52.024460    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.024460    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:52.028629    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:52.060221    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.060221    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:52.064265    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:52.104649    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.104649    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:52.109138    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:52.140487    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.140545    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:52.144120    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:52.172932    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.172932    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:52.176618    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:52.206650    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.206650    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:52.210399    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:52.236993    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.236993    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:52.240861    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:52.270655    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.270655    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:52.270655    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:52.270655    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:52.335104    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:52.335104    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:52.370957    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:52.371840    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:52.457985    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:52.448019   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.449089   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.450136   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.451710   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.452676   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:52.448019   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.449089   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.450136   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.451710   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.452676   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:52.457985    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:52.457985    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:52.486332    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:52.486332    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:55.041298    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:55.065637    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:55.094280    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.094280    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:55.097903    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:55.126902    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.126902    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:55.130716    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:55.159228    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.159228    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:55.163220    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:55.192251    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.192251    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:55.195844    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:55.221302    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.221342    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:55.224818    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:55.251600    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.251600    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:55.258126    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:55.288004    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.288004    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:55.292538    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:55.321503    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.321503    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:55.321503    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:55.321503    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:55.382091    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:55.382091    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:55.417183    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:55.417183    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:55.505809    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:55.497729   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.498853   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.500068   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.501049   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.502394   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:55.497729   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.498853   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.500068   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.501049   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.502394   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:55.505857    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:55.505922    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:55.533563    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:55.533563    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:58.084879    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:58.108938    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:58.141011    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.141011    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:58.144507    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:58.173301    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.173301    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:58.177012    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:58.205946    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.205946    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:58.209603    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:58.239537    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.239626    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:58.243771    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:58.274180    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.274180    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:58.278119    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:58.306549    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.306589    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:58.310707    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:58.341993    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.341993    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:58.345805    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:58.374110    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.374110    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:58.374110    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:58.374110    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:58.438540    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:58.438540    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:58.479144    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:58.479144    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:58.563382    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:58.555856   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.556864   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.558351   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.559659   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.561038   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:58.555856   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.556864   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.558351   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.559659   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.561038   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:58.563382    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:58.563382    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:58.590030    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:58.591001    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:01.143523    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:01.166879    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:01.204311    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.204311    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:01.208667    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:01.236959    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.236959    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:01.241497    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:01.268362    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.268362    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:01.272390    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:01.301769    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.301769    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:01.306386    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:01.334250    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.334250    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:01.338080    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:01.367719    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.367719    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:01.371554    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:01.400912    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.400912    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:01.405087    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:01.433025    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.433079    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:01.433112    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:01.433140    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:01.498716    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:01.498716    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:01.537789    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:01.537789    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:01.621520    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:01.609272   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.610819   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.612400   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.614811   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.616071   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:01.609272   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.610819   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.612400   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.614811   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.616071   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:01.621520    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:01.621520    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:01.651241    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:01.651241    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:04.202726    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:04.233568    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:04.264266    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.264266    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:04.268731    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:04.299179    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.299179    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:04.304521    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:04.333532    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.333532    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:04.337480    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:04.370718    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.370774    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:04.374487    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:04.404113    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.404113    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:04.407484    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:04.439641    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.439641    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:04.442993    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:04.473704    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.473745    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:04.478029    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:04.506810    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.506810    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:04.506810    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:04.506810    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:04.536546    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:04.536546    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:04.595827    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:04.595827    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:04.655750    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:04.655750    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:04.693978    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:04.693978    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:04.780038    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:04.769629   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.770581   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.771942   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.773186   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.774761   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:04.769629   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.770581   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.771942   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.773186   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.774761   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:07.285343    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:07.309791    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:07.342594    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.342658    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:07.346771    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:07.375078    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.375078    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:07.378622    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:07.406406    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.406406    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:07.409700    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:07.439671    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.439702    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:07.443226    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:07.474113    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.474113    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:07.478278    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:07.506266    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.506266    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:07.511246    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:07.539784    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.539813    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:07.543598    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:07.571190    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.571190    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:07.571190    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:07.571190    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:07.621969    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:07.621969    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:07.686280    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:07.686280    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:07.729355    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:07.729355    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:07.818055    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:07.806835   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.807966   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.809280   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.811568   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.812813   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:07.806835   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.807966   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.809280   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.811568   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.812813   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:07.818055    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:07.818055    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:10.353048    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:10.380806    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:10.411111    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.411111    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:10.417906    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:10.445879    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.445879    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:10.449270    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:10.478782    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.478782    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:10.482418    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:10.514768    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.514768    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:10.518402    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:10.549807    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.549841    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:10.553625    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:10.584420    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.584420    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:10.590061    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:10.617570    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.617570    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:10.621915    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:10.650697    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.650697    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:10.650697    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:10.650697    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:10.688035    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:10.688035    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:10.779967    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:10.768220   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.770447   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.773427   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.775250   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.776328   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:10.768220   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.770447   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.773427   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.775250   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.776328   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:10.779967    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:10.779967    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:10.808999    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:10.808999    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:10.857901    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:10.857901    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:13.426838    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:13.455711    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:13.487399    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.487399    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:13.491220    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:13.521694    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.521694    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:13.525468    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:13.554648    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.554648    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:13.559306    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:13.587335    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.587335    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:13.591025    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:13.619654    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.619654    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:13.623563    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:13.653939    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.653939    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:13.657955    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:13.687366    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.687396    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:13.690775    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:13.722113    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.722193    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:13.722231    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:13.722231    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:13.810317    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:13.799024   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.800050   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.801497   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.802454   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.803945   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:13.799024   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.800050   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.801497   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.802454   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.803945   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:13.810317    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:13.810317    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:13.838155    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:13.838155    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:13.883053    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:13.883053    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:13.946291    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:13.946291    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:16.490914    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:16.517055    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:16.546289    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.546289    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:16.549648    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:16.579266    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.579266    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:16.583479    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:16.622750    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.622824    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:16.625968    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:16.653518    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.653558    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:16.657430    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:16.684716    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.684716    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:16.688471    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:16.715508    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.715508    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:16.720093    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:16.747105    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.747105    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:16.751009    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:16.778855    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.778889    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:16.778935    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:16.778935    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:16.866923    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:16.857374   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.858226   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.860682   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.861845   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.863178   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:16.857374   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.858226   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.860682   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.861845   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.863178   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:16.866923    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:16.866923    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:16.893634    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:16.893634    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:16.947106    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:16.947106    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:17.009695    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:17.009695    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:19.555421    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:19.585126    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:19.618491    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.618491    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:19.621943    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:19.649934    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.649934    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:19.654446    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:19.682441    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.682441    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:19.686687    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:19.713873    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.713873    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:19.718086    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:19.746901    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.746901    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:19.751802    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:19.780998    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.780998    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:19.785656    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:19.814435    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.814435    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:19.818376    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:19.842539    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.842539    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:19.842539    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:19.842539    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:19.931943    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:19.922035   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.923477   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.924372   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.926985   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.927824   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:19.922035   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.923477   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.924372   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.926985   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.927824   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:19.931943    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:19.931943    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:19.962377    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:19.962377    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:20.016397    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:20.016397    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:20.080069    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:20.080069    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:22.623830    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:22.648339    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:22.676455    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.676455    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:22.680434    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:22.707663    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.707663    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:22.711156    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:22.740689    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.740689    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:22.747514    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:22.774589    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.774589    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:22.778733    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:22.809957    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.810016    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:22.814216    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:22.843548    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.843548    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:22.848917    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:22.881212    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.881212    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:22.885127    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:22.912249    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.912249    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:22.912249    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:22.912249    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:22.971764    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:22.971764    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:23.012466    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:23.012466    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:23.098040    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:23.088804   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.090233   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.092125   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.092902   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.095639   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:23.088804   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.090233   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.092125   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.092902   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.095639   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:23.098040    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:23.098040    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:23.125246    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:23.125299    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:25.680678    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:25.710865    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:25.744205    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.744205    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:25.748694    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:25.775965    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.775965    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:25.780266    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:25.809226    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.809226    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:25.813428    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:25.843074    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.843074    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:25.847624    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:25.875245    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.875307    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:25.878757    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:25.909526    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.909526    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:25.913226    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:25.940382    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.940382    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:25.945238    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:25.971090    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.971123    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:25.971123    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:25.971123    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:26.056782    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:26.046652   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.047515   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.050195   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.051210   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.054000   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:26.046652   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.047515   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.050195   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.051210   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.054000   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:26.056824    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:26.056824    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:26.088188    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:26.088188    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:26.134947    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:26.134990    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:26.195007    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:26.195007    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:28.743432    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:28.770616    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:28.803520    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.803520    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:28.810180    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:28.835854    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.835854    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:28.839216    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:28.867332    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.867332    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:28.871770    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:28.898967    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.899021    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:28.902579    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:28.930727    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.930781    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:28.934892    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:28.965429    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.965484    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:28.968912    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:28.994989    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.995086    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:28.998524    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:29.029494    4248 logs.go:282] 0 containers: []
	W1212 21:38:29.029494    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:29.029494    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:29.029494    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:29.084546    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:29.084546    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:29.146031    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:29.146031    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:29.185235    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:29.185235    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:29.276958    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:29.265529   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.266811   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.267596   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.272121   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.273001   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:29.265529   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.266811   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.267596   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.272121   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.273001   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:29.277002    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:29.277048    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:31.813255    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:31.837157    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:31.867469    4248 logs.go:282] 0 containers: []
	W1212 21:38:31.867532    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:31.871061    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:31.899568    4248 logs.go:282] 0 containers: []
	W1212 21:38:31.899568    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:31.903533    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:31.932812    4248 logs.go:282] 0 containers: []
	W1212 21:38:31.932812    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:31.937348    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:31.968624    4248 logs.go:282] 0 containers: []
	W1212 21:38:31.968624    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:31.972596    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:31.999542    4248 logs.go:282] 0 containers: []
	W1212 21:38:31.999542    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:32.004209    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:32.034665    4248 logs.go:282] 0 containers: []
	W1212 21:38:32.034665    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:32.038848    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:32.068480    4248 logs.go:282] 0 containers: []
	W1212 21:38:32.068480    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:32.073156    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:32.104268    4248 logs.go:282] 0 containers: []
	W1212 21:38:32.104268    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:32.104268    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:32.104268    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:32.168878    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:32.168878    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:32.209739    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:32.209739    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:32.299388    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:32.287377   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.288363   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.290983   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.292213   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.293196   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:32.287377   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.288363   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.290983   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.292213   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.293196   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:32.299388    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:32.299388    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:32.326590    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:32.327171    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:34.882209    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:34.906646    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:34.937770    4248 logs.go:282] 0 containers: []
	W1212 21:38:34.937770    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:34.941176    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:34.970749    4248 logs.go:282] 0 containers: []
	W1212 21:38:34.970749    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:34.974824    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:35.003731    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.003731    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:35.011153    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:35.043865    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.043865    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:35.047948    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:35.079197    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.079197    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:35.084870    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:35.111591    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.111645    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:35.115847    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:35.144310    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.144310    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:35.148221    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:35.176803    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.176833    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:35.176833    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:35.176833    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:35.236846    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:35.236846    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:35.284685    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:35.284685    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:35.374702    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:35.363981   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.364969   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.367167   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.368565   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.369594   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:35.363981   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.364969   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.367167   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.368565   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.369594   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:35.374702    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:35.374702    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:35.402523    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:35.402584    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:37.960369    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:37.991489    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:38.021000    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.021059    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:38.024791    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:38.056577    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.056577    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:38.061074    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:38.091553    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.091619    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:38.095584    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:38.124245    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.124245    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:38.127814    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:38.156149    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.156149    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:38.159694    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:38.191453    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.191475    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:38.195307    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:38.226021    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.226046    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:38.229445    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:38.258701    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.258701    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:38.258701    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:38.258701    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:38.324178    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:38.324178    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:38.363665    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:38.363665    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:38.454082    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:38.443711   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.444603   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.447135   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.448189   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.449070   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:38.443711   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.444603   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.447135   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.448189   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.449070   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:38.454082    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:38.454082    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:38.481686    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:38.481686    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:41.036796    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:41.064580    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:41.096576    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.096636    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:41.100082    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:41.131382    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.131439    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:41.135017    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:41.164298    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.164360    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:41.167964    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:41.198065    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.198065    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:41.202878    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:41.230510    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.230510    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:41.234299    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:41.263767    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.263767    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:41.267078    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:41.296096    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.296096    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:41.299444    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:41.332967    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.332967    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:41.332967    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:41.332967    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:41.380925    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:41.380925    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:41.445577    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:41.445577    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:41.484612    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:41.484612    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:41.569457    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:41.558338   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.559501   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.561053   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.561965   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.565400   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:41.558338   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.559501   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.561053   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.561965   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.565400   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:41.569457    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:41.569457    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:44.125865    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:44.149891    4248 out.go:203] 
	W1212 21:38:44.151830    4248 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1212 21:38:44.151830    4248 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1212 21:38:44.152349    4248 out.go:285] * Related issues:
	W1212 21:38:44.152403    4248 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1212 21:38:44.152403    4248 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1212 21:38:44.154560    4248 out.go:203] 
	
	
	==> Docker <==
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.460204251Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.460292361Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.460303163Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.460308363Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.460314664Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.460334266Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.460365970Z" level=info msg="Initializing buildkit"
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.559170137Z" level=info msg="Completed buildkit initialization"
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.564331352Z" level=info msg="Daemon has completed initialization"
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.564491671Z" level=info msg="API listen on /run/docker.sock"
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.564517274Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.564565579Z" level=info msg="API listen on [::]:2376"
	Dec 12 21:32:39 newest-cni-449900 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 12 21:32:40 newest-cni-449900 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 21:32:40 newest-cni-449900 cri-dockerd[1218]: time="2025-12-12T21:32:40Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 12 21:32:40 newest-cni-449900 cri-dockerd[1218]: time="2025-12-12T21:32:40Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 12 21:32:40 newest-cni-449900 cri-dockerd[1218]: time="2025-12-12T21:32:40Z" level=info msg="Start docker client with request timeout 0s"
	Dec 12 21:32:40 newest-cni-449900 cri-dockerd[1218]: time="2025-12-12T21:32:40Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 12 21:32:40 newest-cni-449900 cri-dockerd[1218]: time="2025-12-12T21:32:40Z" level=info msg="Loaded network plugin cni"
	Dec 12 21:32:40 newest-cni-449900 cri-dockerd[1218]: time="2025-12-12T21:32:40Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 12 21:32:40 newest-cni-449900 cri-dockerd[1218]: time="2025-12-12T21:32:40Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 12 21:32:40 newest-cni-449900 cri-dockerd[1218]: time="2025-12-12T21:32:40Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 12 21:32:40 newest-cni-449900 cri-dockerd[1218]: time="2025-12-12T21:32:40Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 12 21:32:40 newest-cni-449900 cri-dockerd[1218]: time="2025-12-12T21:32:40Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 12 21:32:40 newest-cni-449900 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:57.427097   19807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:57.428007   19807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:57.429177   19807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:57.430572   19807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:57.431657   19807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +5.817259] CPU: 7 PID: 461935 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f4cc709eb20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f4cc709eaf6.
	[  +0.000001] RSP: 002b:00007ffc97ee3b30 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.851832] CPU: 4 PID: 462074 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f8ee5e9fb20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f8ee5e9faf6.
	[  +0.000001] RSP: 002b:00007ffc84e853d0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 21:38:57 up  2:40,  0 user,  load average: 1.31, 1.15, 2.26
	Linux newest-cni-449900 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 21:38:54 newest-cni-449900 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:38:54 newest-cni-449900 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	Dec 12 21:38:54 newest-cni-449900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:38:54 newest-cni-449900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:38:54 newest-cni-449900 kubelet[19614]: E1212 21:38:54.808369   19614 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:38:54 newest-cni-449900 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:38:54 newest-cni-449900 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:38:55 newest-cni-449900 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	Dec 12 21:38:55 newest-cni-449900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:38:55 newest-cni-449900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:38:55 newest-cni-449900 kubelet[19642]: E1212 21:38:55.567811   19642 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:38:55 newest-cni-449900 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:38:55 newest-cni-449900 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:38:56 newest-cni-449900 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
	Dec 12 21:38:56 newest-cni-449900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:38:56 newest-cni-449900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:38:56 newest-cni-449900 kubelet[19670]: E1212 21:38:56.311378   19670 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:38:56 newest-cni-449900 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:38:56 newest-cni-449900 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:38:56 newest-cni-449900 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
	Dec 12 21:38:56 newest-cni-449900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:38:56 newest-cni-449900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:38:57 newest-cni-449900 kubelet[19701]: E1212 21:38:57.077241   19701 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:38:57 newest-cni-449900 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:38:57 newest-cni-449900 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-449900 -n newest-cni-449900
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-449900 -n newest-cni-449900: exit status 2 (592.5175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-449900" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-449900
helpers_test.go:244: (dbg) docker inspect newest-cni-449900:

-- stdout --
	[
	    {
	        "Id": "8fae8198a0e2f6167bc49d5761b5dae3aaaf06af529e7d9526a1f943f9f5952a",
	        "Created": "2025-12-12T21:22:35.195234972Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 455053,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T21:32:31.209250611Z",
	            "FinishedAt": "2025-12-12T21:32:28.637338591Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/8fae8198a0e2f6167bc49d5761b5dae3aaaf06af529e7d9526a1f943f9f5952a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8fae8198a0e2f6167bc49d5761b5dae3aaaf06af529e7d9526a1f943f9f5952a/hostname",
	        "HostsPath": "/var/lib/docker/containers/8fae8198a0e2f6167bc49d5761b5dae3aaaf06af529e7d9526a1f943f9f5952a/hosts",
	        "LogPath": "/var/lib/docker/containers/8fae8198a0e2f6167bc49d5761b5dae3aaaf06af529e7d9526a1f943f9f5952a/8fae8198a0e2f6167bc49d5761b5dae3aaaf06af529e7d9526a1f943f9f5952a-json.log",
	        "Name": "/newest-cni-449900",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-449900:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-449900",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4a9bf3c0ee4eaaabafc20e3de9d1f9691ed63701dacc3088a1369c1bfb243b50-init/diff:/var/lib/docker/overlay2/98a48e148b465d6f3cfef773b7defe35ccd4e6e0eb3c49d8b329f27b5b52fa09/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4a9bf3c0ee4eaaabafc20e3de9d1f9691ed63701dacc3088a1369c1bfb243b50/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4a9bf3c0ee4eaaabafc20e3de9d1f9691ed63701dacc3088a1369c1bfb243b50/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4a9bf3c0ee4eaaabafc20e3de9d1f9691ed63701dacc3088a1369c1bfb243b50/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-449900",
	                "Source": "/var/lib/docker/volumes/newest-cni-449900/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-449900",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-449900",
	                "name.minikube.sigs.k8s.io": "newest-cni-449900",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3f31e676235f907b98ae0eadc005b2979e05ef379d1d48aaed62fce9b8873d74",
	            "SandboxKey": "/var/run/docker/netns/3f31e676235f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63036"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63037"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63038"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63039"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63040"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-449900": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "bcedcac448e9e1d98fcddd7097fe310c50b6a637d5f23ebf519e961f822823ab",
	                    "EndpointID": "541ab21703cfa47e96a4b680fdee798dc399db4bccf57c1de0d2c6586095d103",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-449900",
	                        "8fae8198a0e2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-449900 -n newest-cni-449900
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-449900 -n newest-cni-449900: exit status 2 (601.4029ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-449900 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-449900 logs -n 25: (1.668537s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                            ARGS                                                                                                            │           PROFILE            │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-124600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                         │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ stop    │ -p default-k8s-diff-port-124600 --alsologtostderr -v=3                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ image   │ embed-certs-729900 image list --format=json                                                                                                                                                                                │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ pause   │ -p embed-certs-729900 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ unpause │ -p embed-certs-729900 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-124600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                    │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ start   │ -p default-k8s-diff-port-124600 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:23 UTC │
	│ delete  │ -p embed-certs-729900                                                                                                                                                                                                      │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:23 UTC │
	│ delete  │ -p embed-certs-729900                                                                                                                                                                                                      │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ image   │ default-k8s-diff-port-124600 image list --format=json                                                                                                                                                                      │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ pause   │ -p default-k8s-diff-port-124600 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ unpause │ -p default-k8s-diff-port-124600 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ delete  │ -p default-k8s-diff-port-124600                                                                                                                                                                                            │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ delete  │ -p default-k8s-diff-port-124600                                                                                                                                                                                            │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ addons  │ enable metrics-server -p no-preload-285600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                    │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:28 UTC │                     │
	│ stop    │ -p no-preload-285600 --alsologtostderr -v=3                                                                                                                                                                                │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:30 UTC │ 12 Dec 25 21:30 UTC │
	│ addons  │ enable dashboard -p no-preload-285600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                               │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:30 UTC │ 12 Dec 25 21:30 UTC │
	│ start   │ -p no-preload-285600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:30 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-449900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                    │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:30 UTC │                     │
	│ stop    │ -p newest-cni-449900 --alsologtostderr -v=3                                                                                                                                                                                │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:32 UTC │ 12 Dec 25 21:32 UTC │
	│ addons  │ enable dashboard -p newest-cni-449900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                               │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:32 UTC │ 12 Dec 25 21:32 UTC │
	│ start   │ -p newest-cni-449900 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:32 UTC │                     │
	│ image   │ newest-cni-449900 image list --format=json                                                                                                                                                                                 │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:38 UTC │ 12 Dec 25 21:38 UTC │
	│ pause   │ -p newest-cni-449900 --alsologtostderr -v=1                                                                                                                                                                                │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:38 UTC │ 12 Dec 25 21:38 UTC │
	│ unpause │ -p newest-cni-449900 --alsologtostderr -v=1                                                                                                                                                                                │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:38 UTC │ 12 Dec 25 21:38 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 21:32:30
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 21:32:30.067892    4248 out.go:360] Setting OutFile to fd 948 ...
	I1212 21:32:30.114132    4248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:32:30.114655    4248 out.go:374] Setting ErrFile to fd 1420...
	I1212 21:32:30.114655    4248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:32:30.127088    4248 out.go:368] Setting JSON to false
	I1212 21:32:30.129182    4248 start.go:133] hostinfo: {"hostname":"minikube4","uptime":9288,"bootTime":1765565862,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 21:32:30.129182    4248 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 21:32:30.137179    4248 out.go:179] * [newest-cni-449900] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 21:32:30.141601    4248 notify.go:221] Checking for updates...
	I1212 21:32:30.144580    4248 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:32:30.147458    4248 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:32:30.149957    4248 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 21:32:30.152754    4248 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 21:32:30.158286    4248 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:32:30.162328    4248 config.go:182] Loaded profile config "newest-cni-449900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:32:30.162996    4248 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 21:32:30.280643    4248 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 21:32:30.284638    4248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:32:30.515373    4248 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 21:32:30.495157348 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:32:30.521131    4248 out.go:179] * Using the docker driver based on existing profile
	I1212 21:32:30.522918    4248 start.go:309] selected driver: docker
	I1212 21:32:30.523440    4248 start.go:927] validating driver "docker" against &{Name:newest-cni-449900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9
PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:32:30.523530    4248 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:32:30.607623    4248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:32:30.833176    4248 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 21:32:30.815233925 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:32:30.833176    4248 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 21:32:30.833176    4248 cni.go:84] Creating CNI manager for ""
	I1212 21:32:30.833176    4248 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:32:30.834178    4248 start.go:353] cluster config:
	{Name:newest-cni-449900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:32:30.837195    4248 out.go:179] * Starting "newest-cni-449900" primary control-plane node in "newest-cni-449900" cluster
	I1212 21:32:30.839178    4248 cache.go:134] Beginning downloading kic base image for docker with docker
	I1212 21:32:30.842177    4248 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:32:30.845192    4248 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:32:30.845192    4248 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 21:32:30.846208    4248 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1212 21:32:30.846208    4248 cache.go:65] Caching tarball of preloaded images
	I1212 21:32:30.846208    4248 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 21:32:30.846208    4248 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1212 21:32:30.846208    4248 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\config.json ...
	I1212 21:32:30.923261    4248 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:32:30.923313    4248 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:32:30.923343    4248 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:32:30.923343    4248 start.go:360] acquireMachinesLock for newest-cni-449900: {Name:mk48eea84400331f4298a4f493f82f3b8d104477 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:32:30.923343    4248 start.go:364] duration metric: took 0s to acquireMachinesLock for "newest-cni-449900"
	I1212 21:32:30.923343    4248 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:32:30.923343    4248 fix.go:54] fixHost starting: 
	I1212 21:32:30.931113    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:30.993597    4248 fix.go:112] recreateIfNeeded on newest-cni-449900: state=Stopped err=<nil>
	W1212 21:32:30.993597    4248 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:32:30.996597    4248 out.go:252] * Restarting existing docker container for "newest-cni-449900" ...
	I1212 21:32:31.000598    4248 cli_runner.go:164] Run: docker start newest-cni-449900
	I1212 21:32:31.538801    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:31.592240    4248 kic.go:430] container "newest-cni-449900" state is running.
	I1212 21:32:31.597242    4248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-449900
	I1212 21:32:31.648243    4248 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\config.json ...
	I1212 21:32:31.650253    4248 machine.go:94] provisionDockerMachine start ...
	I1212 21:32:31.653233    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:31.705233    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:31.706241    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:31.706241    4248 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:32:31.708237    4248 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 21:32:34.889437    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-449900
	
	I1212 21:32:34.889437    4248 ubuntu.go:182] provisioning hostname "newest-cni-449900"
	I1212 21:32:34.893345    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:34.953803    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:34.953803    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:34.953803    4248 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-449900 && echo "newest-cni-449900" | sudo tee /etc/hostname
	W1212 21:32:35.616553   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:32:35.153212    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-449900
	
	I1212 21:32:35.157498    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:35.214117    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:35.214117    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:35.214117    4248 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-449900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-449900/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-449900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:32:35.408770    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:32:35.408770    4248 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1212 21:32:35.408770    4248 ubuntu.go:190] setting up certificates
	I1212 21:32:35.408770    4248 provision.go:84] configureAuth start
	I1212 21:32:35.413085    4248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-449900
	I1212 21:32:35.465735    4248 provision.go:143] copyHostCerts
	I1212 21:32:35.466783    4248 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1212 21:32:35.466783    4248 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1212 21:32:35.466783    4248 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 21:32:35.468113    4248 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1212 21:32:35.468113    4248 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1212 21:32:35.468113    4248 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1212 21:32:35.469268    4248 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1212 21:32:35.469268    4248 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1212 21:32:35.469268    4248 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 21:32:35.469826    4248 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-449900 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-449900]
	I1212 21:32:35.674609    4248 provision.go:177] copyRemoteCerts
	I1212 21:32:35.678598    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:32:35.681582    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:35.739388    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:35.872528    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 21:32:35.899869    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 21:32:35.927844    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 21:32:35.957449    4248 provision.go:87] duration metric: took 548.67ms to configureAuth
	I1212 21:32:35.957449    4248 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:32:35.957975    4248 config.go:182] Loaded profile config "newest-cni-449900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:32:35.961590    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:36.020998    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:36.021502    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:36.021530    4248 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 21:32:36.199792    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1212 21:32:36.199792    4248 ubuntu.go:71] root file system type: overlay
	I1212 21:32:36.199792    4248 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 21:32:36.203186    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:36.259998    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:36.260186    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:36.260186    4248 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 21:32:36.441991    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 21:32:36.446390    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:36.501449    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:36.501449    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:36.501449    4248 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 21:32:36.684877    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:32:36.684877    4248 machine.go:97] duration metric: took 5.0345422s to provisionDockerMachine
	I1212 21:32:36.684877    4248 start.go:293] postStartSetup for "newest-cni-449900" (driver="docker")
	I1212 21:32:36.684877    4248 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:32:36.689141    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:32:36.692539    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:36.754930    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:36.889809    4248 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:32:36.897890    4248 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:32:36.897963    4248 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:32:36.897963    4248 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1212 21:32:36.898205    4248 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1212 21:32:36.898730    4248 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> 133962.pem in /etc/ssl/certs
	I1212 21:32:36.903242    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:32:36.916977    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /etc/ssl/certs/133962.pem (1708 bytes)
	I1212 21:32:36.944800    4248 start.go:296] duration metric: took 259.9189ms for postStartSetup
	I1212 21:32:36.949417    4248 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:32:36.952948    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:37.004507    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:37.144023    4248 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:32:37.153150    4248 fix.go:56] duration metric: took 6.229706s for fixHost
	I1212 21:32:37.153150    4248 start.go:83] releasing machines lock for "newest-cni-449900", held for 6.229706s
	I1212 21:32:37.156410    4248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-449900
	I1212 21:32:37.223471    4248 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1212 21:32:37.227811    4248 ssh_runner.go:195] Run: cat /version.json
	I1212 21:32:37.227811    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:37.230777    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:37.282580    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:37.288636    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	W1212 21:32:37.396194    4248 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1212 21:32:37.419380    4248 ssh_runner.go:195] Run: systemctl --version
	I1212 21:32:37.433080    4248 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:32:37.443489    4248 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:32:37.447019    4248 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:32:37.460679    4248 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:32:37.460679    4248 start.go:496] detecting cgroup driver to use...
	I1212 21:32:37.460679    4248 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:32:37.460679    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:32:37.488659    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1212 21:32:37.507342    4248 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1212 21:32:37.507342    4248 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1212 21:32:37.507342    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 21:32:37.523982    4248 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 21:32:37.528069    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 21:32:37.548632    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:32:37.567777    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 21:32:37.588723    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:32:37.608062    4248 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:32:37.629201    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 21:32:37.649560    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 21:32:37.667527    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 21:32:37.690529    4248 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:32:37.710687    4248 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:32:37.729958    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:37.888501    4248 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 21:32:38.016543    4248 start.go:496] detecting cgroup driver to use...
	I1212 21:32:38.016543    4248 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:32:38.021759    4248 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 21:32:38.045532    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:32:38.071354    4248 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:32:38.150544    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:32:38.173582    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 21:32:38.193077    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:32:38.218979    4248 ssh_runner.go:195] Run: which cri-dockerd
	I1212 21:32:38.231122    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 21:32:38.246140    4248 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1212 21:32:38.271127    4248 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 21:32:38.414548    4248 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 21:32:38.549270    4248 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 21:32:38.549270    4248 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 21:32:38.574682    4248 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1212 21:32:38.596980    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:38.746077    4248 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 21:32:39.571827    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:32:39.594550    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1212 21:32:39.618807    4248 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1212 21:32:39.645739    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:32:39.668299    4248 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 21:32:39.807124    4248 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 21:32:39.946119    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:40.086356    4248 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 21:32:40.115320    4248 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1212 21:32:40.136980    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:40.275781    4248 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1212 21:32:40.384906    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:32:40.403362    4248 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 21:32:40.407665    4248 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 21:32:40.417070    4248 start.go:564] Will wait 60s for crictl version
	I1212 21:32:40.421802    4248 ssh_runner.go:195] Run: which crictl
	I1212 21:32:40.435075    4248 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:32:40.477353    4248 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1212 21:32:40.481633    4248 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:32:40.526598    4248 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:32:40.566170    4248 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1212 21:32:40.570307    4248 cli_runner.go:164] Run: docker exec -t newest-cni-449900 dig +short host.docker.internal
	I1212 21:32:40.704237    4248 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1212 21:32:40.709308    4248 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1212 21:32:40.719244    4248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:32:40.739290    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:40.797774    4248 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 21:32:40.799861    4248 kubeadm.go:884] updating cluster {Name:newest-cni-449900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 21:32:40.800240    4248 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 21:32:40.804298    4248 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:32:40.837317    4248 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 21:32:40.837317    4248 docker.go:621] Images already preloaded, skipping extraction
	I1212 21:32:40.841250    4248 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:32:40.875188    4248 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 21:32:40.875188    4248 cache_images.go:86] Images are preloaded, skipping loading
	I1212 21:32:40.875188    4248 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 docker true true} ...
	I1212 21:32:40.875753    4248 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-449900 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:32:40.879753    4248 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 21:32:40.954718    4248 cni.go:84] Creating CNI manager for ""
	I1212 21:32:40.954718    4248 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:32:40.954718    4248 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1212 21:32:40.954718    4248 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-449900 NodeName:newest-cni-449900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:32:40.954718    4248 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-449900"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:32:40.959727    4248 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 21:32:40.972418    4248 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:32:40.977111    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:32:40.988704    4248 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1212 21:32:41.011653    4248 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 21:32:41.032100    4248 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1212 21:32:41.059217    4248 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1212 21:32:41.066089    4248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
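Note: the two runs above are minikube's idempotent /etc/hosts update — drop any stale `control-plane.minikube.internal` entry, append the current one, and copy the result back. The same pattern against a scratch file (the file path and IPs below are illustrative, not minikube's):

```shell
# Rewrite a hosts-style file: strip the old entry, append the new one.
# A temp file stands in for /etc/hosts; IPs are illustrative.
tmp=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.1\tcontrol-plane.minikube.internal\n' > "$tmp"
{ grep -v $'\tcontrol-plane.minikube.internal$' "$tmp"; \
  printf '192.168.85.2\tcontrol-plane.minikube.internal\n'; } > "$tmp.new"
mv "$tmp.new" "$tmp"
grep 'control-plane.minikube.internal' "$tmp"
rm -f "$tmp"
```

Because the old entry is filtered out before the new one is appended, re-running the update never accumulates duplicate lines.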
	I1212 21:32:41.085707    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:41.226868    4248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:32:41.248749    4248 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900 for IP: 192.168.85.2
	I1212 21:32:41.248749    4248 certs.go:195] generating shared ca certs ...
	I1212 21:32:41.248749    4248 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:32:41.249389    4248 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1212 21:32:41.249651    4248 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1212 21:32:41.249726    4248 certs.go:257] generating profile certs ...
	I1212 21:32:41.250263    4248 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\client.key
	I1212 21:32:41.250513    4248 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.key.67e5e88d
	I1212 21:32:41.250722    4248 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\proxy-client.key
	I1212 21:32:41.251524    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem (1338 bytes)
	W1212 21:32:41.251741    4248 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396_empty.pem, impossibly tiny 0 bytes
	I1212 21:32:41.251826    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1212 21:32:41.252063    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 21:32:41.252220    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 21:32:41.252398    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1212 21:32:41.252710    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem (1708 bytes)
	I1212 21:32:41.254134    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:32:41.288378    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 21:32:41.319186    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:32:41.346629    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:32:41.379412    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 21:32:41.407785    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 21:32:41.434595    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:32:41.465218    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:32:41.491104    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem --> /usr/share/ca-certificates/13396.pem (1338 bytes)
	I1212 21:32:41.526955    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /usr/share/ca-certificates/133962.pem (1708 bytes)
	I1212 21:32:41.554086    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:32:41.585032    4248 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:32:41.609643    4248 ssh_runner.go:195] Run: openssl version
	I1212 21:32:41.624413    4248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/133962.pem
	I1212 21:32:41.643262    4248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/133962.pem /etc/ssl/certs/133962.pem
	I1212 21:32:41.661513    4248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133962.pem
	I1212 21:32:41.670034    4248 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 21:32:41.674026    4248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133962.pem
	I1212 21:32:41.722721    4248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:32:41.739428    4248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:32:41.758028    4248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:32:41.775766    4248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:32:41.783842    4248 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:32:41.788493    4248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:32:41.834978    4248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:32:41.852716    4248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13396.pem
	I1212 21:32:41.872502    4248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13396.pem /etc/ssl/certs/13396.pem
	I1212 21:32:41.892268    4248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13396.pem
	I1212 21:32:41.902126    4248 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 21:32:41.906908    4248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13396.pem
	I1212 21:32:41.957407    4248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
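Note: the hash-then-symlink sequences above are how OpenSSL's trust directory works: `openssl x509 -hash` prints the subject-name hash, and lookup succeeds only if `/etc/ssl/certs/<hash>.0` links to the PEM (`3ec20f2e.0`, `b5213941.0`, and `51391683.0` are those hashes here). A self-contained sketch with a throwaway CA in a temp directory:

```shell
# Generate a throwaway self-signed CA, then link it under its subject hash,
# the way /etc/ssl/certs/<hash>.0 entries are built. All paths are temporary.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$tmp/ca.key" \
  -out "$tmp/ca.pem" -days 1 -subj "/CN=demoCA" 2>/dev/null
h=$(openssl x509 -hash -noout -in "$tmp/ca.pem")   # e.g. "b5213941"
ln -fs "$tmp/ca.pem" "$tmp/$h.0"
test -L "$tmp/$h.0" && echo "trust link: $h.0"
rm -rf "$tmp"
```

The `sudo test -L` runs in the log are the same final check: verify the hash-named symlink exists after `ln -fs`.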
	I1212 21:32:41.975429    4248 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:32:41.988570    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:32:42.036143    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:32:42.085875    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:32:42.134210    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:32:42.182920    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:32:42.231061    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
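Note: the six `openssl x509 -checkend 86400` runs above are expiry checks — the command exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now, which is presumably how minikube decides the control-plane certs need no regeneration. Demonstrated against a throwaway cert (paths are temporary, not minikube's):

```shell
# -checkend N exits 0 iff the cert remains valid N seconds from now.
# Throwaway 30-day cert in a temp dir.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$tmp/k.pem" \
  -out "$tmp/c.pem" -days 30 -subj "/CN=demo" 2>/dev/null
if openssl x509 -noout -in "$tmp/c.pem" -checkend 86400; then
  echo "valid for at least 24h"
fi
rm -rf "$tmp"
```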
	I1212 21:32:42.275151    4248 kubeadm.go:401] StartCluster: {Name:newest-cni-449900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:32:42.279857    4248 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 21:32:42.316959    4248 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:32:42.330116    4248 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 21:32:42.330159    4248 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 21:32:42.334136    4248 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:32:42.349113    4248 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:32:42.353059    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.410308    4248 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-449900" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:32:42.411003    4248 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-449900" cluster setting kubeconfig missing "newest-cni-449900" context setting]
	I1212 21:32:42.411047    4248 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:32:42.433468    4248 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:32:42.450763    4248 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1212 21:32:42.450851    4248 kubeadm.go:602] duration metric: took 120.6907ms to restartPrimaryControlPlane
	I1212 21:32:42.450851    4248 kubeadm.go:403] duration metric: took 175.6977ms to StartCluster
	I1212 21:32:42.450887    4248 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:32:42.451069    4248 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:32:42.452318    4248 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:32:42.452708    4248 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 21:32:42.452708    4248 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 21:32:42.453236    4248 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-449900"
	I1212 21:32:42.453345    4248 addons.go:70] Setting dashboard=true in profile "newest-cni-449900"
	I1212 21:32:42.453345    4248 addons.go:239] Setting addon dashboard=true in "newest-cni-449900"
	W1212 21:32:42.453442    4248 addons.go:248] addon dashboard should already be in state true
	I1212 21:32:42.453345    4248 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-449900"
	I1212 21:32:42.453442    4248 config.go:182] Loaded profile config "newest-cni-449900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:32:42.453345    4248 addons.go:70] Setting default-storageclass=true in profile "newest-cni-449900"
	I1212 21:32:42.453442    4248 host.go:66] Checking if "newest-cni-449900" exists ...
	I1212 21:32:42.453548    4248 host.go:66] Checking if "newest-cni-449900" exists ...
	I1212 21:32:42.453548    4248 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-449900"
	I1212 21:32:42.456924    4248 out.go:179] * Verifying Kubernetes components...
	I1212 21:32:42.462734    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:42.462734    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:42.462734    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:42.464017    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:42.521801    4248 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1212 21:32:42.522505    4248 addons.go:239] Setting addon default-storageclass=true in "newest-cni-449900"
	I1212 21:32:42.522505    4248 host.go:66] Checking if "newest-cni-449900" exists ...
	I1212 21:32:42.524049    4248 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:32:42.525755    4248 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1212 21:32:42.527435    4248 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:42.527435    4248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:32:42.529307    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 21:32:42.529307    4248 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 21:32:42.532224    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.534064    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.535884    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:42.589450    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:42.589450    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:42.590457    4248 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:32:42.590457    4248 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:32:42.593457    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.639453    4248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:32:42.655360    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:42.731687    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 21:32:42.731721    4248 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 21:32:42.735286    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:42.750859    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 21:32:42.750859    4248 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 21:32:42.774055    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.777032    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 21:32:42.777032    4248 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 21:32:42.833790    4248 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:32:42.834403    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 21:32:42.834403    4248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 21:32:42.837834    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:32:42.838786    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:42.862404    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 21:32:42.862439    4248 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1212 21:32:42.938528    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:42.938593    4248 retry.go:31] will retry after 285.852869ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:42.946574    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 21:32:42.946574    4248 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 21:32:42.970519    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 21:32:42.970570    4248 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 21:32:43.044060    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 21:32:43.044113    4248 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W1212 21:32:43.058105    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.058105    4248 retry.go:31] will retry after 367.133117ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.065874    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 21:32:43.065934    4248 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 21:32:43.090839    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:43.170845    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.170845    4248 retry.go:31] will retry after 360.542613ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.229247    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:43.312297    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.312297    4248 retry.go:31] will retry after 217.305042ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.338331    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:43.430298    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:43.514114    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.514114    4248 retry.go:31] will retry after 194.385848ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.535363    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:43.536351    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:43.629787    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.629787    4248 retry.go:31] will retry after 687.212662ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:32:43.630799    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.630799    4248 retry.go:31] will retry after 521.23237ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.714316    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:43.819832    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.819832    4248 retry.go:31] will retry after 810.162007ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.838681    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:44.158533    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:44.239462    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.239462    4248 retry.go:31] will retry after 722.851273ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.322823    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:44.338752    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:32:44.412482    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.412540    4248 retry.go:31] will retry after 687.458608ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.636450    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:44.715327    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.715327    4248 retry.go:31] will retry after 881.564419ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.839869    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:44.968341    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:45.056052    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.056052    4248 retry.go:31] will retry after 1.083270835s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.107611    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:45.652119   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:32:45.188500    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.188500    4248 retry.go:31] will retry after 1.051455266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.338492    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:45.601770    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:45.692383    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.692901    4248 retry.go:31] will retry after 847.636525ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.840006    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:46.145354    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:46.237762    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.237762    4248 retry.go:31] will retry after 1.841338358s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.245135    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:46.317745    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.317805    4248 retry.go:31] will retry after 1.659422291s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.338989    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:46.545951    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:46.622899    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.622899    4248 retry.go:31] will retry after 2.146117093s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.840053    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:47.339185    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:47.839795    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:47.982731    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:48.067392    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.067392    4248 retry.go:31] will retry after 2.380093596s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.084148    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:48.170522    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.171061    4248 retry.go:31] will retry after 1.169420442s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.339141    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:48.775985    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:32:48.839214    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:32:48.853292    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.853292    4248 retry.go:31] will retry after 1.773821104s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:49.339743    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:49.345080    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:49.426473    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:49.426473    4248 retry.go:31] will retry after 2.584062662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:49.838663    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:50.339208    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:50.452188    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:50.553300    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:50.553376    4248 retry.go:31] will retry after 3.748834475s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:50.633099    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:50.715582    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:50.715582    4248 retry.go:31] will retry after 2.533349108s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:50.839720    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:51.339390    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:51.839102    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:52.016916    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:52.095547    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:52.095725    4248 retry.go:31] will retry after 3.902877695s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:52.339080    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:52.839789    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:53.254021    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:53.339386    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:53.339438    4248 retry.go:31] will retry after 8.052230376s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:53.341078    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:53.838612    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:54.306967    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:54.338622    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:32:54.396253    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:54.396253    4248 retry.go:31] will retry after 4.728755572s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:54.840384    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:32:55.691200   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:32:55.339060    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:55.838985    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:56.003958    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:56.086306    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:56.086306    4248 retry.go:31] will retry after 4.27813674s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:56.339722    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:56.838152    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:57.339158    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:57.840925    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:58.339529    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:58.839108    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:59.131643    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:59.224557    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:59.224611    4248 retry.go:31] will retry after 6.97751667s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:59.340004    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:59.839304    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:00.339076    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:00.368989    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:33:00.453078    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:00.453078    4248 retry.go:31] will retry after 11.55737722s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:00.839680    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:01.337369    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:01.396666    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:33:01.481506    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:01.481506    4248 retry.go:31] will retry after 9.469717232s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:01.840205    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:02.340807    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:02.839775    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:03.338747    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:03.839967    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:04.338805    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:04.839654    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:05.730439   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:05.340177    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:05.839010    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:06.207461    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:33:06.310159    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:06.310159    4248 retry.go:31] will retry after 14.485985358s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:06.338617    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:06.839760    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:07.339574    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:07.839761    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:08.339168    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:08.839559    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:09.340617    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:09.841017    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:10.339655    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:10.840253    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:10.956764    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:33:11.041153    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:11.041153    4248 retry.go:31] will retry after 7.720343102s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:11.339467    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:11.838030    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:12.015476    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:33:12.097506    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:12.098032    4248 retry.go:31] will retry after 13.738254929s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:12.340239    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:12.840739    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:13.338865    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:13.841423    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:14.338850    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:14.839426    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:15.769327   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:15.340112    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:15.839885    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:16.340084    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:16.839133    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:17.340118    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:17.838995    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:18.340239    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:18.768160    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:33:18.839718    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:18.858996    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:18.859079    4248 retry.go:31] will retry after 29.319727103s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:19.339526    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:19.839033    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:20.342579    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:20.799893    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:33:20.838333    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:20.898204    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:20.898204    4248 retry.go:31] will retry after 24.787260988s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:21.340136    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:21.839432    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:22.339934    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:22.838948    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:23.339680    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:23.839713    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:24.339736    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:24.841279    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:25.803331   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:25.340290    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:25.840116    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:25.841834    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:33:25.931872    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:25.931872    4248 retry.go:31] will retry after 19.805631473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:26.340454    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:26.840685    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:27.340143    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:27.839689    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:28.341476    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:28.840343    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:29.340212    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:29.839319    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:30.338669    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:30.838810    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:31.338837    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:31.840375    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:32.339813    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:32.839324    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:33.339926    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:33.839761    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:34.341257    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:34.840212    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:35.339489    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:35.839856    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:35.842883   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:36.337998    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:36.840979    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:37.339343    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:37.839384    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:38.340134    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:38.840040    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:39.339613    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:39.841297    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:40.339971    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:40.839521    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:41.340107    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:41.841992    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:42.341019    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:42.840372    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:42.879262    4248 logs.go:282] 0 containers: []
	W1212 21:33:42.879262    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:42.883398    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:42.914084    4248 logs.go:282] 0 containers: []
	W1212 21:33:42.914084    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:42.917664    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:42.949277    4248 logs.go:282] 0 containers: []
	W1212 21:33:42.949344    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:42.952925    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:42.982130    4248 logs.go:282] 0 containers: []
	W1212 21:33:42.982130    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:42.985453    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:43.018015    4248 logs.go:282] 0 containers: []
	W1212 21:33:43.018015    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:43.021515    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:43.051693    4248 logs.go:282] 0 containers: []
	W1212 21:33:43.051693    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:43.055531    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:43.086156    4248 logs.go:282] 0 containers: []
	W1212 21:33:43.086156    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:43.089864    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:43.124751    4248 logs.go:282] 0 containers: []
	W1212 21:33:43.124784    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:43.124813    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:43.124844    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:43.199819    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:43.199905    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:43.266773    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:43.266773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:43.306805    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:43.306805    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:43.398810    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:43.387138    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.388868    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.391036    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.392819    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.394391    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:43.387138    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.388868    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.391036    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.392819    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.394391    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:43.398906    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:43.398934    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:45.691215    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:33:45.743028    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:33:45.876156   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:33:45.776796    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:45.776826    4248 retry.go:31] will retry after 41.02577954s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:33:45.824108    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:45.824108    4248 retry.go:31] will retry after 22.15645807s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:45.933456    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:45.956650    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:45.987658    4248 logs.go:282] 0 containers: []
	W1212 21:33:45.987702    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:45.991775    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:46.021440    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.021486    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:46.025175    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:46.052749    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.052749    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:46.057405    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:46.089535    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.089535    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:46.093406    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:46.133904    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.133904    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:46.137545    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:46.167364    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.167364    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:46.172118    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:46.199303    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.199335    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:46.203217    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:46.233482    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.233482    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:46.233482    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:46.233482    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:46.309757    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:46.300789    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.302135    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.303841    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.305512    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.306441    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:46.300789    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.302135    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.303841    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.305512    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.306441    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:46.309757    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:46.309757    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:46.342247    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:46.342247    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:46.392802    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:46.392802    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:46.456332    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:46.456332    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:48.184906    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:33:48.272776    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:48.272776    4248 retry.go:31] will retry after 31.924309129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:49.001621    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:49.031284    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:49.062413    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.062413    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:49.067359    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:49.099157    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.099157    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:49.103086    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:49.145777    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.145841    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:49.148934    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:49.177432    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.177432    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:49.180892    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:49.208677    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.208753    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:49.213124    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:49.242190    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.242273    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:49.246212    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:49.271793    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.271793    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:49.275860    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:49.301204    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.301304    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:49.301304    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:49.301304    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:49.387773    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:49.378267    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.379554    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.380906    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.382182    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.383498    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:49.378267    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.379554    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.380906    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.382182    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.383498    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:49.387773    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:49.387773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:49.417403    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:49.417403    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:49.468610    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:49.468669    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:49.530801    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:49.530801    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:52.075422    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:52.099836    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:52.132317    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.132317    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:52.136505    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:52.168253    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.168329    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:52.172204    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:52.201698    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.201728    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:52.204999    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:52.232400    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.232400    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:52.235747    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:52.264373    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.264373    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:52.268631    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:52.296360    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.296360    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:52.301499    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:52.328506    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.328506    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:52.332618    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:52.364046    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.364046    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:52.364046    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:52.364046    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:52.429893    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:52.429893    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:52.469809    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:52.469809    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:52.561531    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:52.552048    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.553328    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.554454    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.555743    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.556766    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:52.552048    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.553328    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.554454    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.555743    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.556766    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:52.561531    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:52.561531    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:52.589557    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:52.589610    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:33:55.915985   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:55.143558    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:55.169650    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:55.199312    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.199312    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:55.203019    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:55.231107    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.231107    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:55.234903    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:55.262662    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.262662    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:55.267031    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:55.298297    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.298297    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:55.302781    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:55.329536    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.329536    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:55.333805    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:55.361920    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.361920    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:55.366064    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:55.390952    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.390952    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:55.395295    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:55.420565    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.420565    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:55.420565    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:55.420565    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:55.485763    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:55.485763    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:55.523584    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:55.524585    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:55.609239    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:55.599132    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.600050    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.601408    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.603109    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.604199    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:55.599132    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.600050    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.601408    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.603109    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.604199    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:55.609239    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:55.609239    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:55.635439    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:55.635439    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:58.192127    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:58.215045    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:58.245598    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.245598    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:58.249366    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:58.277778    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.277778    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:58.282003    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:58.308406    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.308406    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:58.312530    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:58.343615    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.343615    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:58.347469    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:58.374271    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.374323    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:58.378940    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:58.408013    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.408013    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:58.412815    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:58.442217    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.442217    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:58.446143    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:58.473696    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.473696    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:58.473696    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:58.473696    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:58.512536    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:58.512536    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:58.598847    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:58.588486    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.590430    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.592244    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.593816    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.594663    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:58.588486    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.590430    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.592244    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.593816    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.594663    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:58.598847    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:58.598847    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:58.631972    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:58.631972    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:58.686549    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:58.686549    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:01.254401    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:01.280957    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:01.311649    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.311649    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:01.317426    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:01.347111    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.347111    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:01.351587    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:01.384140    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.384140    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:01.388189    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:01.414950    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.415035    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:01.419033    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:01.451573    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.451573    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:01.458437    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:01.486676    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.486676    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:01.490132    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:01.519922    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.519945    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:01.523525    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:01.551133    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.551133    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:01.551133    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:01.551226    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:01.643156    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:01.629552    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.630752    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.631976    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.634745    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.636369    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:01.629552    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.630752    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.631976    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.634745    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.636369    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:01.643201    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:01.643201    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:01.670680    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:01.670680    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:01.719121    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:01.719121    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:01.778945    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:01.778945    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:04.321335    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:04.346182    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:04.379694    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.379694    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:04.383453    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:04.413770    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.413770    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:04.417438    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:04.449689    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.449689    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:04.453945    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:04.484307    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.484336    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:04.488052    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:04.520437    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.520529    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:04.523808    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:04.554411    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.554486    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:04.558132    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:04.590943    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.590991    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:04.596733    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:04.626155    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.626155    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:04.626155    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:04.626155    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:04.690704    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:04.690704    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:04.737151    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:04.737151    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:04.836298    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:04.823870    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.824851    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.826975    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.828203    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.829992    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:04.823870    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.824851    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.826975    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.828203    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.829992    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:04.836298    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:04.836298    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:04.866456    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:04.866456    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:34:05.955333   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:07.431984    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:07.455983    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:07.487054    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.487054    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:07.491008    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:07.518559    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.518559    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:07.523241    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:07.555141    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.555206    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:07.559053    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:07.586080    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.586129    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:07.590415    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:07.617729    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.617802    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:07.621528    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:07.648847    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.648847    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:07.653016    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:07.679589    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.679589    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:07.682589    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:07.710509    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.710549    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:07.710584    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:07.710584    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:07.740602    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:07.740602    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:07.795596    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:07.795596    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:07.856481    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:07.856481    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:07.896541    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:07.896541    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:34:07.984068    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:34:07.988076    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:07.977659    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.978729    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.979455    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.981723    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.982573    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:07.977659    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.978729    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.979455    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.981723    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.982573    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1212 21:34:08.063575    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:34:08.063575    4248 retry.go:31] will retry after 24.947157304s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:34:10.493129    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:10.518032    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:10.551592    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.551592    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:10.555349    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:10.583478    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.583478    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:10.587337    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:10.614968    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.614968    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:10.618410    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:10.649067    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.649067    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:10.651995    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:10.683408    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.683408    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:10.687055    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:10.719380    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.719380    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:10.722379    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:10.750539    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.750539    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:10.753888    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:10.783559    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.783559    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:10.783559    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:10.783559    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:10.847683    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:10.847683    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:10.886290    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:10.886290    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:10.977825    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:10.965780    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.967047    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.970464    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.971759    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.973064    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:10.965780    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.967047    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.970464    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.971759    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.973064    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:10.977825    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:10.977825    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:11.007383    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:11.007383    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:13.563252    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:13.588423    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:13.621565    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.621565    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:13.625750    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:13.654777    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.654777    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:13.658374    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:13.688618    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.688672    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:13.692472    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:13.719610    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.719610    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:13.723237    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:13.752648    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.752648    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:13.756634    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:13.783829    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.783897    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:13.788161    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:13.819558    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.819558    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:13.823971    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:13.852579    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.852579    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:13.852650    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:13.852650    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:13.917806    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:13.917806    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:13.957291    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:13.957291    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:14.045151    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:14.033435    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.035385    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.037555    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.038496    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.039928    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:14.033435    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.035385    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.037555    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.038496    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.039928    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:14.045151    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:14.045151    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:14.072113    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:14.072113    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:34:15.995826   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:16.634542    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:16.664271    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:16.693836    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.693836    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:16.697626    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:16.724961    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.724961    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:16.729680    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:16.759174    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.759174    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:16.764416    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:16.793992    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.793992    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:16.801287    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:16.833032    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.833032    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:16.837930    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:16.867028    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.867109    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:16.871232    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:16.901023    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.901023    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:16.904729    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:16.932215    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.932215    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:16.932215    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:16.932215    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:17.000205    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:17.000205    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:17.040242    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:17.040242    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:17.128411    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:17.118697    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.119770    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.121001    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.121935    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.122994    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:17.118697    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.119770    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.121001    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.121935    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.122994    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:17.128411    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:17.128411    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:17.154693    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:17.154693    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:19.712060    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:19.740196    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:19.769381    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.769381    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:19.774003    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:19.805171    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.805171    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:19.809714    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:19.839073    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.839073    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:19.846215    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:19.874403    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.874403    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:19.878637    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:19.912913    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.912913    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:19.916626    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:19.944658    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.944710    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:19.948241    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:19.979179    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.979241    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:19.982846    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:20.011754    4248 logs.go:282] 0 containers: []
	W1212 21:34:20.011812    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:20.011864    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:20.011913    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:20.052176    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:20.052176    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:20.143325    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:20.132505    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.133469    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.134667    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.135927    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.139001    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:20.132505    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.133469    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.134667    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.135927    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.139001    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:20.143363    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:20.143410    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:20.172292    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:20.172292    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:20.202316    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:34:20.222997    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:20.222997    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1212 21:34:20.293327    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:34:20.293327    4248 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 21:34:22.836246    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:22.860619    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:22.894925    4248 logs.go:282] 0 containers: []
	W1212 21:34:22.894925    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:22.898706    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:22.928391    4248 logs.go:282] 0 containers: []
	W1212 21:34:22.928391    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:22.931509    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:22.962526    4248 logs.go:282] 0 containers: []
	W1212 21:34:22.962526    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:22.966341    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:22.998739    4248 logs.go:282] 0 containers: []
	W1212 21:34:22.998810    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:23.002260    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:23.038485    4248 logs.go:282] 0 containers: []
	W1212 21:34:23.038485    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:23.042357    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:23.069862    4248 logs.go:282] 0 containers: []
	W1212 21:34:23.069862    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:23.073843    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:23.102066    4248 logs.go:282] 0 containers: []
	W1212 21:34:23.102066    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:23.105940    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:23.134159    4248 logs.go:282] 0 containers: []
	W1212 21:34:23.134159    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:23.134159    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:23.134159    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:23.200245    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:23.200245    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:23.245899    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:23.245899    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:23.334171    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:23.323723    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.324634    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.327041    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.328182    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.329038    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:23.323723    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.324634    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.327041    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.328182    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.329038    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:23.334171    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:23.334171    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:23.363271    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:23.363890    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:34:26.035692   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:25.919925    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:25.943736    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:25.976745    4248 logs.go:282] 0 containers: []
	W1212 21:34:25.976745    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:25.982399    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:26.011034    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.011111    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:26.015074    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:26.041930    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.041960    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:26.046384    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:26.079224    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.079224    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:26.083294    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:26.114533    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.114622    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:26.118567    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:26.144766    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.144766    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:26.148635    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:26.178718    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.178773    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:26.182397    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:26.209360    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.209428    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:26.209458    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:26.209458    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:26.246526    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:26.246526    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:26.333000    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:26.322480    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.324197    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.326202    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.327840    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.328729    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:26.322480    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.324197    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.326202    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.327840    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.328729    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:26.333074    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:26.333074    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:26.364508    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:26.364508    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:26.415398    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:26.415922    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:26.808632    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:34:26.887159    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:34:26.887159    4248 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 21:34:28.982561    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:29.008658    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:29.038321    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.038321    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:29.043365    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:29.074505    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.074505    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:29.079133    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:29.107625    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.107625    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:29.111459    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:29.141746    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.141771    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:29.145351    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:29.173757    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.173757    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:29.177451    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:29.207313    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.207375    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:29.211355    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:29.238896    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.238896    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:29.242592    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:29.271887    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.271955    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:29.271992    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:29.271992    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:29.302135    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:29.302135    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:29.354758    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:29.354787    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:29.415567    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:29.416566    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:29.460567    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:29.460567    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:29.568507    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:29.558828    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.559760    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.561215    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.564376    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.566192    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:29.558828    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.559760    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.561215    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.564376    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.566192    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:32.072619    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:32.098196    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:32.129542    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.129605    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:32.133207    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:32.165871    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.165871    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:32.169728    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:32.197622    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.197622    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:32.201406    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:32.229774    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.229866    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:32.233559    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:32.261871    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.261922    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:32.265550    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:32.292765    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.292838    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:32.297859    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:32.324309    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.324309    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:32.330216    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:32.361218    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.361300    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:32.361300    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:32.361300    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:32.397467    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:32.397467    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:32.486261    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:32.475376    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.476624    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.477661    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.478789    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.479928    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:32.475376    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.476624    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.477661    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.478789    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.479928    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:32.486261    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:32.486261    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:32.514032    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:32.514583    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:32.560591    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:32.560639    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:33.016195    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:34:33.096004    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:34:33.096004    4248 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 21:34:33.099548    4248 out.go:179] * Enabled addons: 
	I1212 21:34:33.102664    4248 addons.go:530] duration metric: took 1m50.6481558s for enable addons: enabled=[]
	W1212 21:34:36.071150   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:35.127374    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:35.151249    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:35.181629    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.181629    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:35.185603    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:35.214096    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.214151    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:35.218239    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:35.247180    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.247201    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:35.253408    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:35.284673    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.284673    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:35.288386    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:35.317675    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.317675    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:35.320949    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:35.350108    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.350178    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:35.353675    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:35.385201    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.385201    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:35.388633    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:35.416281    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.416281    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:35.416281    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:35.416281    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:35.444773    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:35.444773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:35.494185    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:35.494185    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:35.554851    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:35.554851    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:35.593466    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:35.593466    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:35.675497    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:35.665255    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.666291    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.667389    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.668747    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.670048    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:35.665255    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.666291    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.667389    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.668747    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.670048    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:38.181493    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:38.207315    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:38.240970    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.240970    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:38.244930    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:38.270853    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.270853    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:38.274626    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:38.302607    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.302607    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:38.306311    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:38.332974    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.332998    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:38.336938    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:38.366523    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.366523    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:38.370763    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:38.400788    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.400855    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:38.404730    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:38.432105    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.432140    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:38.435718    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:38.468252    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.468252    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:38.468252    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:38.468252    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:38.498010    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:38.498010    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:38.549065    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:38.549065    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:38.610282    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:38.610282    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:38.649865    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:38.649865    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:38.742735    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:38.729670    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.730594    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.734869    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.735575    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.737749    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:38.729670    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.730594    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.734869    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.735575    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.737749    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:41.248859    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:41.273451    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:41.307576    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.307576    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:41.311206    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:41.341191    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.341191    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:41.344873    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:41.373089    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.373089    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:41.377064    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:41.407927    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.407927    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:41.411904    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:41.438747    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.438747    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:41.442684    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:41.471705    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.471705    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:41.475643    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:41.502964    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.503009    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:41.506219    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:41.537468    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.537468    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:41.537468    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:41.537468    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:41.601385    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:41.601385    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:41.640441    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:41.640441    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:41.726762    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:41.716055    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.717335    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.718444    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.719692    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.720641    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:41.716055    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.717335    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.718444    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.719692    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.720641    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:41.727284    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:41.727418    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:41.753195    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:41.753250    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:44.308085    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:44.334644    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:44.365798    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.365798    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:44.369363    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:44.401410    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.401463    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:44.405291    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:44.434343    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.434343    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:44.438273    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:44.468474    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.468525    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:44.474306    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:44.500642    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.500642    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:44.504057    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:44.533188    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.533188    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:44.538912    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:44.570110    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.570156    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:44.573802    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:44.628237    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.628237    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:44.628313    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:44.628313    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:44.695236    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:44.695236    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:44.756020    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:44.756020    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:44.797607    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:44.797607    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:44.885974    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:44.873979    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.875233    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.876512    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.878696    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.880447    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:44.873979    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.875233    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.876512    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.878696    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.880447    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:44.885974    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:44.885974    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1212 21:34:46.110883   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:47.418476    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:47.448163    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:47.485273    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.485367    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:47.489088    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:47.519610    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.519610    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:47.523527    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:47.556797    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.556797    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:47.561198    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:47.592455    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.592486    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:47.597545    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:47.642336    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.642336    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:47.646873    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:47.674652    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.674652    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:47.678167    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:47.711489    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.711583    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:47.715137    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:47.743744    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.743744    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:47.743744    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:47.743744    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:47.772500    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:47.772500    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:47.821703    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:47.821703    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:47.885067    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:47.886068    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:47.927691    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:47.927691    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:48.009816    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:47.997942    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:47.998780    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.001728    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.003155    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.003997    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:47.997942    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:47.998780    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.001728    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.003155    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.003997    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:50.515712    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:50.539433    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:50.569952    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.570011    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:50.573934    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:50.602042    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.602042    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:50.606815    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:50.637314    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.637314    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:50.641189    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:50.671374    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.671448    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:50.675158    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:50.705121    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.705121    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:50.708839    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:50.736349    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.736349    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:50.740642    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:50.766780    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.766780    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:50.771844    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:50.799830    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.799830    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:50.799830    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:50.799935    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:50.864221    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:50.864221    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:50.902900    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:50.902900    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:50.987201    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:50.977230    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.978460    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.979921    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.981707    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.982796    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:50.977230    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.978460    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.979921    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.981707    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.982796    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:50.987245    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:50.987307    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:51.013974    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:51.013974    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:53.565470    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:53.591015    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:53.622676    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.622700    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:53.626721    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:53.656673    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.656708    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:53.660173    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:53.690897    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.690897    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:53.695672    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:53.724746    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.724746    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:53.729650    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:53.755860    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.755860    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:53.761786    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:53.788721    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.788721    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:53.792819    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:53.824882    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.824923    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:53.827878    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:53.861835    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.861835    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:53.861835    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:53.861922    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:53.923537    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:53.923537    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:53.963431    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:53.963431    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:54.047319    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:54.037350    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.038501    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.039589    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.040496    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.043315    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:54.037350    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.038501    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.039589    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.040496    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.043315    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:54.047386    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:54.047386    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:54.072633    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:54.072633    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:34:56.149055   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:56.625180    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:56.650655    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:56.685280    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.685318    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:56.689156    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:56.714493    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.714493    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:56.718695    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:56.746923    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.746990    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:56.750886    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:56.778419    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.778419    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:56.783916    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:56.811946    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.811946    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:56.815910    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:56.846245    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.846245    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:56.849750    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:56.881552    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.881612    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:56.886159    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:56.914494    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.914494    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:56.914494    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:56.914494    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:56.978107    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:56.978107    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:57.017141    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:57.017141    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:57.105278    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:57.095758    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.096802    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.099867    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.100954    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.102366    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:57.095758    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.096802    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.099867    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.100954    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.102366    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:57.105278    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:57.105278    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:57.136106    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:57.136106    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:59.698008    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:59.721859    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:59.752926    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.752926    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:59.758293    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:59.787817    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.787817    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:59.792012    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:59.820724    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.820724    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:59.824383    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:59.853943    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.853943    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:59.856939    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:59.884234    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.884234    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:59.887359    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:59.917769    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.917769    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:59.920766    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:59.947735    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.947735    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:59.950845    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:59.982686    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.982686    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:59.982686    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:59.982686    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:00.047428    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:00.047428    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:00.087722    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:00.087722    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:00.173037    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:00.162970    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.163861    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.166472    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.167086    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.169872    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:00.162970    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.163861    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.166472    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.167086    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.169872    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:00.173037    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:00.173124    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:00.200722    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:00.200722    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:02.758771    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:02.785740    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:02.818761    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.818761    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:02.822630    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:02.856985    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.857041    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:02.860042    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:02.891635    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.891635    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:02.896827    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:02.927006    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.927006    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:02.930675    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:02.961911    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.961911    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:02.966203    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:02.995618    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.995618    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:03.000489    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:03.029638    4248 logs.go:282] 0 containers: []
	W1212 21:35:03.029720    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:03.033579    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:03.062226    4248 logs.go:282] 0 containers: []
	W1212 21:35:03.062226    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:03.062226    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:03.062226    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:03.123924    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:03.123924    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:03.164267    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:03.164267    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:03.278702    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:03.266561    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.267580    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.268479    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.270653    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.271579    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:03.266561    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.267580    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.268479    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.270653    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.271579    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:03.278702    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:03.278702    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:03.310678    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:03.310678    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:35:06.187241   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:35:05.870522    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:05.895548    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:05.931745    4248 logs.go:282] 0 containers: []
	W1212 21:35:05.931745    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:05.935905    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:05.967105    4248 logs.go:282] 0 containers: []
	W1212 21:35:05.967167    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:05.970526    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:05.999378    4248 logs.go:282] 0 containers: []
	W1212 21:35:05.999501    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:06.005272    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:06.033559    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.033559    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:06.037432    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:06.067367    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.067423    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:06.071216    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:06.098778    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.098778    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:06.102725    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:06.129330    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.129373    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:06.133426    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:06.161909    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.161982    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:06.161982    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:06.161982    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:06.227303    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:06.227303    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:06.268038    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:06.268038    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:06.361371    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:06.349507    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.350379    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.353273    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.354715    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.355937    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:06.349507    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.350379    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.353273    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.354715    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.355937    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:06.361371    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:06.361371    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:06.387773    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:06.387773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:08.952500    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:08.976516    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:09.008567    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.008567    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:09.012505    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:09.041661    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.041661    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:09.045897    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:09.072715    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.072715    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:09.076471    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:09.104975    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.104975    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:09.110239    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:09.137529    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.137529    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:09.142874    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:09.172751    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.172856    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:09.176271    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:09.208124    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.208124    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:09.211966    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:09.240860    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.240922    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:09.240922    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:09.240922    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:09.277501    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:09.277501    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:09.365788    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:09.355192    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.356293    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.357382    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.358419    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.359695    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:09.355192    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.356293    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.357382    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.358419    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.359695    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:09.365788    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:09.365788    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:09.393564    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:09.393564    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:09.446625    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:09.446625    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:12.014046    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:12.039061    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:12.073012    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.073012    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:12.076517    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:12.106078    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.106078    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:12.110106    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:12.148792    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.148792    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:12.153359    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:12.180485    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.180485    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:12.184489    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:12.216517    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.216517    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:12.219938    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:12.248201    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.248280    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:12.251955    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:12.279303    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.279303    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:12.283688    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:12.311155    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.311155    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:12.311240    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:12.311240    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:12.363330    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:12.363330    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:12.424272    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:12.424272    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:12.465092    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:12.465092    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:12.553464    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:12.543149    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.544047    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.546957    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.548802    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.550266    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:12.543149    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.544047    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.546957    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.548802    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.550266    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:12.553464    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:12.553464    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:15.089907    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:15.115463    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	W1212 21:35:16.227474   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:35:15.148783    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.148874    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:15.153957    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:15.184011    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.184098    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:15.189531    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:15.217178    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.217178    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:15.220770    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:15.250751    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.250751    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:15.254712    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:15.285217    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.285217    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:15.289685    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:15.318549    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.318549    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:15.322745    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:15.350449    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.350449    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:15.354843    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:15.387373    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.387373    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:15.387373    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:15.387448    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:15.447923    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:15.447923    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:15.486013    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:15.486013    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:15.578245    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:15.568067    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.569055    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.570286    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.571140    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.573311    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:15.568067    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.569055    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.570286    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.571140    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.573311    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:15.578325    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:15.578352    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:15.605626    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:15.605626    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:18.166679    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:18.192323    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:18.225358    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.225358    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:18.228980    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:18.257552    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.257552    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:18.261101    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:18.289460    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.289460    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:18.294831    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:18.323718    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.323799    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:18.327263    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:18.355946    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.356051    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:18.360915    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:18.389130    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.389212    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:18.392968    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:18.421891    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.421979    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:18.425694    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:18.454158    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.454158    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:18.454158    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:18.454158    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:18.537920    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:18.527608    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.528385    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.531174    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.532949    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.533980    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:18.527608    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.528385    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.531174    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.532949    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.533980    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:18.537920    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:18.537920    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:18.567575    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:18.567575    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:18.620910    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:18.620945    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:18.683030    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:18.683030    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:21.227570    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:21.256918    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:21.288783    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.288783    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:21.292853    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:21.323426    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.323426    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:21.326841    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:21.358519    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.358519    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:21.364432    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:21.396744    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.396829    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:21.400322    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:21.428939    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.428939    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:21.432771    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:21.462453    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.462453    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:21.466020    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:21.494057    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.494106    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:21.497434    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:21.528610    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.528649    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:21.528649    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:21.528649    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:21.565220    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:21.565220    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:21.656317    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:21.646689    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.647621    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.649439    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.651349    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.653188    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:21.646689    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.647621    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.649439    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.651349    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.653188    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:21.656317    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:21.656317    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:21.685835    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:21.685835    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:21.735864    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:21.735864    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:24.301555    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:24.325946    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:24.356277    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.356277    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:24.360286    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:24.388916    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.388916    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:24.392624    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:24.420665    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.420696    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:24.424318    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:24.453296    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.453296    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:24.456761    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:24.485923    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.485923    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:24.489706    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:24.521430    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.521430    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:24.525001    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:24.553182    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.553182    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:24.557651    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:24.585999    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.585999    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:24.585999    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:24.585999    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:24.649025    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:24.649025    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:24.687813    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:24.687813    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:24.771442    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:24.762805    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.764135    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.765433    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.766914    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.768359    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:24.762805    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.764135    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.765433    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.766914    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.768359    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:24.771442    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:24.771442    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:24.798209    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:24.798236    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:35:26.266522   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:35:27.358394    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:27.382187    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:27.414963    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.414963    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:27.418575    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:27.446874    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.446874    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:27.450946    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:27.478168    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.478206    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:27.481426    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:27.510494    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.510494    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:27.514962    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:27.544571    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.544571    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:27.548425    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:27.577760    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.577760    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:27.583647    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:27.611248    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.611321    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:27.614827    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:27.643657    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.643657    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:27.643657    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:27.643657    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:27.727590    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:27.720083    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.721080    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.722381    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.723519    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.724748    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:27.720083    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.721080    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.722381    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.723519    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.724748    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:27.727590    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:27.727590    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:27.758480    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:27.758480    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:27.807919    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:27.807919    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:27.868159    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:27.868159    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:30.414374    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:30.439656    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:30.469888    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.469958    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:30.473898    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:30.506297    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.506297    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:30.510214    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:30.545930    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.545982    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:30.549910    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:30.576713    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.576713    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:30.581000    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:30.611561    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.611561    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:30.615085    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:30.643517    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.643600    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:30.647297    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:30.677589    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.677589    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:30.681477    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:30.712989    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.713050    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:30.713050    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:30.713050    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:30.800951    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:30.787443    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.789072    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.790773    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.791718    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.795269    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:30.787443    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.789072    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.790773    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.791718    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.795269    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:30.800985    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:30.801031    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:30.827165    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:30.827165    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:30.877219    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:30.877219    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:30.939298    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:30.939298    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:33.484552    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:33.509270    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:33.543536    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.543536    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:33.547226    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:33.579403    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.579456    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:33.583656    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:33.611689    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.611715    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:33.615934    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:33.641875    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.641939    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:33.645973    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:33.678622    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.678622    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:33.682927    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:33.712281    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.712305    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:33.716240    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:33.744051    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.744127    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:33.747417    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:33.779521    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.779591    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:33.779591    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:33.779591    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:33.840187    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:33.840187    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:33.882639    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:33.882639    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:33.969148    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:33.957711    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.958835    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.961164    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.962371    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.963325    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:33.957711    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.958835    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.961164    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.962371    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.963325    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:33.969148    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:33.969148    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:33.997909    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:33.997909    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:35:36.305111   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:35:36.549852    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:36.575183    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:36.607678    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.607678    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:36.611597    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:36.639236    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.639236    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:36.642949    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:36.671624    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.671715    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:36.677217    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:36.704204    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.704254    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:36.707613    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:36.736929    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.736929    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:36.741122    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:36.769406    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.769406    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:36.772706    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:36.799543    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.799620    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:36.803815    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:36.831079    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.831159    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:36.831159    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:36.831200    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:36.896378    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:36.896378    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:36.934866    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:36.934866    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:37.023664    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:37.013641    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.015155    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.015847    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.018003    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.018979    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:37.013641    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.015155    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.015847    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.018003    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.018979    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:37.023664    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:37.023664    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:37.053528    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:37.053528    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:39.635097    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:39.658801    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:39.689842    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.689897    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:39.693102    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:39.720497    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.720497    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:39.724029    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:39.755115    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.755115    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:39.759351    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:39.788837    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.788837    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:39.795292    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:39.820715    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.820715    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:39.824986    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:39.851308    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.851308    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:39.855167    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:39.885106    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.885106    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:39.888522    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:39.919426    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.919426    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:39.919426    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:39.919426    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:40.002497    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:39.992667    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.993466    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.995734    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.996894    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.998122    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:39.992667    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.993466    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.995734    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.996894    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.998122    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:40.002497    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:40.002497    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:40.033288    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:40.033332    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:40.080834    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:40.080834    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:40.164330    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:40.164330    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:42.710863    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:42.734865    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:42.767778    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.767778    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:42.771433    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:42.798128    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.798128    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:42.801849    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:42.831381    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.831381    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:42.834753    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:42.862923    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.862977    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:42.866577    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:42.894694    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.894694    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:42.898324    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:42.927095    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.927169    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:42.930897    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:42.960437    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.960485    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:42.963899    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:42.992769    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.992769    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:42.992769    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:42.992769    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:43.076050    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:43.066448    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.067916    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.069057    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.070194    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.071270    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:43.066448    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.067916    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.069057    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.070194    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.071270    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:43.076050    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:43.076050    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:43.117444    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:43.117444    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:43.163609    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:43.163675    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:43.222686    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:43.222686    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1212 21:35:46.341652   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:35:45.769012    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:45.792622    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:45.828260    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.828300    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:45.832060    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:45.860265    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.860345    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:45.864126    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:45.892449    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.892449    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:45.895900    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:45.928107    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.928492    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:45.933848    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:45.963204    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.963204    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:45.967539    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:45.994068    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.994068    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:45.997960    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:46.029774    4248 logs.go:282] 0 containers: []
	W1212 21:35:46.029774    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:46.034005    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:46.064207    4248 logs.go:282] 0 containers: []
	W1212 21:35:46.064207    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:46.064275    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:46.064297    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:46.150334    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:46.151334    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:46.192422    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:46.192422    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:46.282161    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:46.268121    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.269865    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.274691    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.275494    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.278053    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:46.268121    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.269865    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.274691    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.275494    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.278053    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:46.282161    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:46.282161    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:46.308247    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:46.308247    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:48.880691    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:48.905168    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:48.936143    4248 logs.go:282] 0 containers: []
	W1212 21:35:48.936143    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:48.941055    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:48.967633    4248 logs.go:282] 0 containers: []
	W1212 21:35:48.967681    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:48.972986    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:49.001908    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.001978    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:49.005690    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:49.033288    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.033288    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:49.037158    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:49.068272    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.068272    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:49.072674    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:49.118349    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.118385    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:49.123821    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:49.152003    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.152003    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:49.155603    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:49.184782    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.184857    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:49.184857    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:49.184857    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:49.245561    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:49.245561    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:49.286211    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:49.286211    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:49.376977    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:49.367834   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.369032   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.370233   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.372131   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.374116   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:49.367834   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.369032   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.370233   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.372131   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.374116   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:49.376977    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:49.376977    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:49.403713    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:49.403713    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:51.956745    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:51.982481    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:52.016133    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.016133    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:52.021023    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:52.049536    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.049536    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:52.053672    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:52.080846    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.080846    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:52.084803    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:52.113297    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.113338    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:52.116543    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:52.147940    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.147940    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:52.151545    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:52.182320    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.182320    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:52.186389    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:52.214710    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.214710    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:52.219053    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:52.245190    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.245190    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:52.245190    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:52.245190    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:52.298311    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:52.298311    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:52.366732    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:52.366732    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:52.407792    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:52.407792    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:52.496661    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:52.484312   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.485150   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.488916   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.491754   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.493226   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:52.484312   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.485150   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.488916   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.491754   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.493226   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:52.496661    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:52.496715    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:55.027190    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:55.056715    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:55.092856    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.092927    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:55.096633    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:55.129725    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.129780    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:55.133503    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:55.161685    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.161764    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:55.165325    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:55.194081    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.194081    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:55.197364    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:55.229572    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.229572    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:55.233158    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:55.260429    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.260429    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:55.264792    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:55.292582    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.292582    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:55.296385    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:55.324732    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.324732    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:55.324732    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:55.324732    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:55.378532    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:55.378610    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:55.442350    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:55.442350    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:55.482305    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:55.482305    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:55.568490    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:55.558199   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.559481   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.560697   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.561968   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.565448   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:55.558199   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.559481   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.560697   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.561968   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.565448   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:55.568490    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:55.568490    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:58.101014    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:58.125393    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:58.155725    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.155725    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:58.159868    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:58.189119    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.189119    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:58.193157    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:58.223237    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.223237    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:58.226683    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:58.255695    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.255753    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:58.260433    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:58.288788    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.288859    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:58.292437    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:58.323598    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.323671    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:58.327478    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:58.354090    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.354178    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:58.357888    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:58.386253    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.386279    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:58.386314    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:58.386314    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:58.447141    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:58.447141    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:58.486943    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:58.486943    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:58.568464    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:58.560119   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.561312   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.562297   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.564010   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.565939   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:58.560119   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.561312   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.562297   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.564010   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.565939   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:58.568464    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:58.568464    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:58.595849    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:58.595886    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:35:56.384511   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:36:01.149596    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:01.175623    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:01.208519    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.208575    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:01.211916    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:01.245312    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.245312    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:01.249530    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:01.278392    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.278392    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:01.286444    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:01.317355    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.317406    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:01.321724    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:01.353217    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.353217    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:01.357807    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:01.391831    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.391831    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:01.395723    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:01.424095    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.424095    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:01.429268    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:01.462926    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.462926    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:01.462926    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:01.462926    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:01.551074    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:01.539269   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.540233   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.541438   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.542446   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.543830   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:01.539269   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.540233   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.541438   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.542446   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.543830   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:01.551074    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:01.551074    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:01.578369    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:01.578369    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:01.629021    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:01.629021    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:01.698809    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:01.698809    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:04.243670    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:04.267614    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:04.301625    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.301625    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:04.305862    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:04.333024    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.333024    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:04.336022    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:04.367192    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.367192    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:04.370605    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:04.399595    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.399648    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:04.403374    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:04.432778    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.432778    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:04.436573    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:04.467412    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.467412    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:04.471329    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:04.498578    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.498578    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:04.502597    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:04.531764    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.531784    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:04.531784    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:04.531843    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:04.570958    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:04.570958    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:04.661113    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:04.648460   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.649248   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.652578   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.653840   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.654704   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:04.648460   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.649248   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.652578   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.653840   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.654704   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:04.661169    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:04.661169    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:04.690481    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:04.690542    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:04.741754    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:04.741754    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:07.308278    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:07.331410    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:07.365378    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.365378    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:07.369132    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:07.399048    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.399048    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:07.403054    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:07.435986    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.435986    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:07.440195    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:07.468277    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.468277    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:07.473061    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:07.502665    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.502737    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:07.505870    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:07.535294    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.535294    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:07.539205    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:07.568443    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.568443    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:07.571680    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:07.603485    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.603485    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:07.603485    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:07.603485    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:07.654029    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:07.654069    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:07.718737    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:07.718737    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:07.758197    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:07.758197    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:07.848949    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:07.837788   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.838778   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.839873   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.841153   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.842068   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:07.837788   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.838778   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.839873   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.841153   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.842068   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:07.848949    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:07.848949    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1212 21:36:06.422793   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:36:10.383554    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:10.406923    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:10.436672    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.438058    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:10.441328    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:10.470329    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.470329    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:10.475355    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:10.503029    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.504040    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:10.508067    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:10.535078    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.535078    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:10.538911    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:10.572578    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.572578    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:10.576953    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:10.605274    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.605274    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:10.609586    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:10.636893    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.636893    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:10.640901    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:10.670634    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.670634    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:10.670634    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:10.670634    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:10.740301    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:10.740301    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:10.777927    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:10.777927    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:10.872052    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:10.861842   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.862596   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.865231   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.866657   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.867639   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:10.861842   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.862596   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.865231   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.866657   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.867639   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:10.872052    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:10.872052    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:10.903069    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:10.903069    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:13.460331    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:13.482121    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:13.513917    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.513943    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:13.517730    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:13.551122    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.551122    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:13.554989    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:13.585497    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.585531    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:13.591062    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:13.617529    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.617529    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:13.620977    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:13.649520    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.649563    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:13.653279    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:13.680320    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.680320    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:13.684170    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:13.715222    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.715222    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:13.719105    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:13.748387    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.748387    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:13.748387    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:13.748387    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:13.813468    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:13.813468    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:13.854552    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:13.854552    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:13.940965    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:13.930876   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.931992   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.933250   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.934621   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.935762   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:13.930876   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.931992   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.933250   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.934621   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.935762   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:13.940965    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:13.940965    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:13.967276    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:13.967276    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:16.517841    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:16.542582    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:16.572379    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.572379    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:16.576052    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:16.603452    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.603544    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:16.607222    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:16.634623    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.634623    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:16.638649    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:16.669129    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.669129    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:16.673369    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:16.701294    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.701294    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:16.707834    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:16.734962    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.734962    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:16.739394    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:16.768315    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.768315    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:16.772559    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:16.801591    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.801670    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:16.801687    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:16.801687    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:16.866870    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:16.866870    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:16.906766    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:16.906766    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:16.998441    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:16.985782   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.986673   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.991870   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.993024   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.993940   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:16.985782   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.986673   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.991870   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.993024   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.993940   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:16.998441    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:16.998441    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:17.026255    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:17.026255    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:19.584671    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:19.610156    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:19.644064    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.644129    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:19.648288    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:19.678479    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.678479    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:19.682139    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:19.711766    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.711766    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:19.715209    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:19.744913    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.744913    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:19.748961    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:19.780312    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.780312    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:19.784184    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:19.812347    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.812347    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:19.816306    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:19.844923    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.844923    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:19.848927    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:19.877442    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.877442    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:19.877442    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:19.877442    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:19.942152    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:19.942152    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:19.981218    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:19.981218    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:20.072288    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:20.063301   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.064454   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.065982   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.067322   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.068426   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:20.063301   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.064454   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.065982   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.067322   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.068426   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:20.072288    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:20.072288    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:20.099643    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:20.099643    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:36:16.463968   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:36:25.116556   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1212 21:36:25.116556   13804 node_ready.go:38] duration metric: took 6m0.0006991s for node "no-preload-285600" to be "Ready" ...
	I1212 21:36:22.659535    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:22.685256    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:22.716650    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.716701    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:22.720282    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:22.748382    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.748382    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:22.752865    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:22.781255    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.781255    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:22.785427    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:22.817875    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.817875    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:22.822168    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:22.850625    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.850625    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:22.854306    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:22.882603    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.882665    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:22.886850    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:22.915661    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.915661    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:22.919273    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:22.948188    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.948219    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:22.948219    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:22.948272    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:23.001854    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:23.001854    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:23.062918    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:23.062918    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:23.102898    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:23.102898    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:23.188691    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:23.179388   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.180542   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.181616   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.183060   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.184504   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:23.179388   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.180542   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.181616   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.183060   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.184504   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:23.188733    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:23.188773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:25.120615   13804 out.go:203] 
	W1212 21:36:25.123657   13804 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 21:36:25.123657   13804 out.go:285] * 
	W1212 21:36:25.125621   13804 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 21:36:25.128573   13804 out.go:203] 
	I1212 21:36:25.720984    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:25.747517    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:25.789126    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.789126    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:25.792555    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:25.825100    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.825100    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:25.829108    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:25.859944    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.859944    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:25.862936    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:25.899027    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.899027    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:25.903029    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:25.932069    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.932069    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:25.937652    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:25.970039    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.970039    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:25.974772    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:26.007166    4248 logs.go:282] 0 containers: []
	W1212 21:36:26.007166    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:26.010547    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:26.043326    4248 logs.go:282] 0 containers: []
	W1212 21:36:26.043326    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:26.043380    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:26.043380    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:26.136579    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:26.129570   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.130713   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.131776   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.133029   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.133944   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:26.129570   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.130713   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.131776   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.133029   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.133944   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:26.136579    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:26.136579    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:26.164100    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:26.164100    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:26.215761    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:26.215761    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:26.284627    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:26.284627    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:28.841950    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:28.867715    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:28.905745    4248 logs.go:282] 0 containers: []
	W1212 21:36:28.905745    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:28.908970    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:28.939518    4248 logs.go:282] 0 containers: []
	W1212 21:36:28.939518    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:28.943636    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:28.973085    4248 logs.go:282] 0 containers: []
	W1212 21:36:28.973085    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:28.977068    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:29.006533    4248 logs.go:282] 0 containers: []
	W1212 21:36:29.006533    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:29.011428    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:29.051385    4248 logs.go:282] 0 containers: []
	W1212 21:36:29.051385    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:29.055841    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:29.091342    4248 logs.go:282] 0 containers: []
	W1212 21:36:29.091342    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:29.095332    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:29.123336    4248 logs.go:282] 0 containers: []
	W1212 21:36:29.123336    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:29.126340    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:29.155367    4248 logs.go:282] 0 containers: []
	W1212 21:36:29.155367    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:29.155367    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:29.155367    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:29.207287    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:29.207287    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:29.272168    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:29.272168    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:29.312257    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:29.312257    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:29.391617    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:29.382784   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.383879   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.384928   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.386418   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.387786   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:29.382784   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.383879   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.384928   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.386418   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.387786   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:29.391617    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:29.391617    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:31.923841    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:31.950124    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:31.983967    4248 logs.go:282] 0 containers: []
	W1212 21:36:31.983967    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:31.987737    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:32.015027    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.015027    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:32.020109    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:32.055983    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.056068    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:32.059730    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:32.089140    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.089140    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:32.094462    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:32.122929    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.122929    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:32.126837    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:32.156251    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.156251    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:32.160350    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:32.191862    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.191949    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:32.195885    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:32.223866    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.223925    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:32.223925    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:32.223950    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:32.255049    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:32.255049    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:32.302818    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:32.302880    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:32.366288    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:32.366288    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:32.405752    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:32.405752    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:32.490704    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:32.476428   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.477300   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.482162   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.483107   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.484601   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:32.476428   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.477300   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.482162   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.483107   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.484601   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:34.995924    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:35.024010    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:35.056509    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.056509    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:35.060912    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:35.093115    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.093115    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:35.097758    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:35.128352    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.128352    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:35.132438    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:35.159545    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.159545    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:35.163881    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:35.193455    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.193455    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:35.197292    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:35.225826    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.225826    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:35.230118    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:35.258718    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.258718    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:35.262754    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:35.289884    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.289884    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:35.289884    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:35.289884    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:35.354177    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:35.354177    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:35.392766    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:35.393766    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:35.508577    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:35.495201   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.497790   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.499934   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.500799   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.503455   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:35.495201   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.497790   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.499934   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.500799   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.503455   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:35.508577    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:35.508577    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:35.536964    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:35.538023    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:38.113096    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:38.138012    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:38.170611    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.170611    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:38.174540    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:38.203460    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.203460    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:38.209947    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:38.239843    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.239843    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:38.243116    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:38.271611    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.271611    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:38.275487    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:38.305418    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.305450    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:38.309409    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:38.336902    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.336902    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:38.340380    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:38.367606    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.367606    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:38.373821    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:38.402583    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.402583    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:38.402583    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:38.402583    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:38.438279    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:38.438279    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:38.525316    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:38.512227   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.513179   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.516702   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.518912   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.519829   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:38.512227   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.513179   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.516702   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.518912   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.519829   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:38.525316    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:38.525316    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:38.552742    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:38.553263    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:38.623531    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:38.623531    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:41.192803    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:41.221527    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:41.253765    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.253765    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:41.258162    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:41.286154    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.286154    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:41.290125    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:41.316985    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.316985    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:41.321219    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:41.349797    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.349797    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:41.353105    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:41.383082    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.383082    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:41.386895    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:41.414456    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.414456    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:41.418483    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:41.449520    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.449577    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:41.453163    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:41.486452    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.486504    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:41.486504    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:41.486504    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:41.547617    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:41.547617    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:41.587426    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:41.587426    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:41.672162    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:41.660909   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.663125   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.664194   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.665601   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.666634   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:41.660909   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.663125   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.664194   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.665601   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.666634   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:41.672162    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:41.672162    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:41.698838    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:41.698838    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:44.254238    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:44.279639    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:44.313852    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.313852    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:44.317789    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:44.346488    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.346488    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:44.349923    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:44.379740    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.379774    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:44.383168    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:44.412140    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.412140    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:44.416191    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:44.460651    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.460681    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:44.465023    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:44.496502    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.496526    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:44.500357    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:44.532104    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.532155    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:44.536284    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:44.564677    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.564677    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:44.564677    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:44.564768    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:44.642641    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:44.642641    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:44.681185    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:44.681185    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:44.775811    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:44.763716   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.764946   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.767080   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.768963   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.770287   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:44.763716   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.764946   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.767080   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.768963   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.770287   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:44.775858    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:44.775858    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:44.802443    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:44.802443    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:47.355434    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:47.380861    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:47.416615    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.416688    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:47.422899    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:47.449927    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.449927    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:47.453937    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:47.482382    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.482382    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:47.486265    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:47.517752    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.517752    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:47.521863    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:47.553097    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.553097    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:47.557020    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:47.586229    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.586229    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:47.590605    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:47.629776    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.629776    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:47.633503    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:47.660408    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.660408    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:47.660408    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:47.660408    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:47.751292    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:47.741586   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.742768   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.744535   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.745790   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.746879   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:47.741586   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.742768   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.744535   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.745790   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.746879   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:47.751292    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:47.751292    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:47.779192    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:47.779254    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:47.837296    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:47.837296    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:47.900027    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:47.900027    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:50.444550    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:50.467997    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:50.496690    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.496690    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:50.500967    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:50.526317    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.526317    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:50.530527    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:50.561433    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.561433    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:50.566001    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:50.618519    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.618519    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:50.622092    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:50.650073    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.650073    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:50.655016    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:50.683594    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.683623    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:50.687452    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:50.718509    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.718509    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:50.724946    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:50.757545    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.757577    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:50.757618    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:50.757618    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:50.819457    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:50.819457    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:50.858548    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:50.858548    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:50.941749    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:50.931367   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.932776   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.933833   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.935487   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.938377   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:50.931367   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.932776   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.933833   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.935487   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.938377   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:50.941749    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:50.941749    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:50.969772    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:50.969772    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:53.520939    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:53.549491    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:53.583344    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.583344    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:53.588894    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:53.618751    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.618751    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:53.623090    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:53.650283    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.650283    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:53.656108    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:53.682662    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.682727    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:53.686551    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:53.713705    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.713705    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:53.717716    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:53.744792    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.744792    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:53.749211    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:53.779976    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.779976    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:53.783888    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:53.815109    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.815109    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:53.815109    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:53.815109    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:53.876921    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:53.876921    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:53.916304    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:53.916304    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:54.003977    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:53.994198   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.995584   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.997212   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.998274   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.999401   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:53.994198   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.995584   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.997212   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.998274   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.999401   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:54.004510    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:54.004510    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:54.033807    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:54.033807    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:56.586896    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:56.610373    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:56.643875    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.643875    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:56.648210    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:56.679979    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.679979    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:56.684252    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:56.712701    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.712745    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:56.716425    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:56.746231    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.746231    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:56.750051    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:56.778902    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.778902    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:56.784361    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:56.813624    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.813624    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:56.817949    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:56.846221    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.846221    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:56.849772    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:56.880299    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.880299    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:56.880299    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:56.880299    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:56.945090    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:56.946089    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:56.985505    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:56.985505    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:57.077375    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:57.068729   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.069760   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.070813   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.071834   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.072749   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:57.068729   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.069760   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.070813   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.071834   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.072749   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:57.077375    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:57.077375    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:57.103533    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:57.103533    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:59.659092    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:59.684113    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:59.716016    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.716040    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:59.719576    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:59.749209    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.749209    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:59.752876    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:59.781442    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.781442    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:59.785342    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:59.814766    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.814766    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:59.818786    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:59.846373    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.846373    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:59.849782    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:59.877994    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.877994    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:59.881893    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:59.910479    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.910479    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:59.914372    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:59.946561    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.946561    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:59.946561    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:59.946561    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:00.008124    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:00.008124    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:00.047147    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:00.047147    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:00.137432    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:00.126870   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.127736   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.130207   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.131199   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.132348   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:00.126870   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.127736   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.130207   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.131199   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.132348   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:00.137480    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:00.137480    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:00.167211    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:00.167211    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:02.725601    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:02.750880    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:02.781655    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.781720    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:02.785930    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:02.814342    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.815352    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:02.819060    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:02.848212    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.848212    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:02.852622    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:02.879034    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.879034    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:02.883002    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:02.914061    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.914061    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:02.918271    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:02.946216    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.946289    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:02.949752    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:02.979537    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.979570    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:02.983289    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:03.012201    4248 logs.go:282] 0 containers: []
	W1212 21:37:03.012201    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:03.012201    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:03.012201    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:03.098494    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:03.086265   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.087072   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.089538   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.090486   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.092996   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:03.086265   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.087072   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.089538   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.090486   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.092996   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:03.098494    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:03.098494    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:03.124942    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:03.124942    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:03.172838    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:03.172838    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:03.233652    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:03.233652    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:05.778260    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:05.806049    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:05.834569    4248 logs.go:282] 0 containers: []
	W1212 21:37:05.834569    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:05.838184    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:05.871331    4248 logs.go:282] 0 containers: []
	W1212 21:37:05.871331    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:05.874924    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:05.904108    4248 logs.go:282] 0 containers: []
	W1212 21:37:05.904108    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:05.907882    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:05.941911    4248 logs.go:282] 0 containers: []
	W1212 21:37:05.941911    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:05.945711    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:05.978806    4248 logs.go:282] 0 containers: []
	W1212 21:37:05.978845    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:05.983103    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:06.010395    4248 logs.go:282] 0 containers: []
	W1212 21:37:06.010395    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:06.015899    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:06.043426    4248 logs.go:282] 0 containers: []
	W1212 21:37:06.043475    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:06.047525    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:06.075777    4248 logs.go:282] 0 containers: []
	W1212 21:37:06.075777    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:06.075777    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:06.075777    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:06.140912    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:06.140912    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:06.180839    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:06.180839    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:06.273920    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:06.262433   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.263564   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.264506   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.265910   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.267120   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:06.262433   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.263564   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.264506   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.265910   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.267120   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:06.273941    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:06.273941    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:06.301408    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:06.301408    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:08.853362    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:08.880482    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:08.912285    4248 logs.go:282] 0 containers: []
	W1212 21:37:08.912285    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:08.915914    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:08.945359    4248 logs.go:282] 0 containers: []
	W1212 21:37:08.945359    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:08.951021    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:08.978398    4248 logs.go:282] 0 containers: []
	W1212 21:37:08.978398    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:08.981959    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:09.013763    4248 logs.go:282] 0 containers: []
	W1212 21:37:09.013763    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:09.017724    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:09.045423    4248 logs.go:282] 0 containers: []
	W1212 21:37:09.045423    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:09.049596    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:09.077554    4248 logs.go:282] 0 containers: []
	W1212 21:37:09.077554    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:09.081163    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:09.108945    4248 logs.go:282] 0 containers: []
	W1212 21:37:09.109001    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:09.112577    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:09.141679    4248 logs.go:282] 0 containers: []
	W1212 21:37:09.141740    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:09.141765    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:09.141765    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:09.207494    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:09.208014    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:09.275675    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:09.275675    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:09.320177    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:09.320252    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:09.418820    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:09.405124   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.406042   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.410434   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.411496   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.412429   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:09.405124   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.406042   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.410434   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.411496   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.412429   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:09.418849    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:09.418849    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:11.950067    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:11.974163    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:12.007025    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.007025    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:12.010964    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:12.042863    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.042863    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:12.046143    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:12.076655    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.076726    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:12.080236    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:12.107161    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.107161    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:12.113344    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:12.142179    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.142272    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:12.146446    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:12.176797    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.176797    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:12.180681    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:12.209797    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.209797    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:12.213605    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:12.244494    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.244494    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:12.244494    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:12.244494    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:12.332970    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:12.322197   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.323340   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.325130   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.326221   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.328022   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:12.322197   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.323340   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.325130   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.326221   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.328022   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:12.332970    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:12.332970    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:12.362486    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:12.363006    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:12.407548    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:12.407548    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:12.469640    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:12.469640    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:15.019141    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:15.042869    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:15.073404    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.073404    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:15.076962    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:15.105390    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.105390    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:15.109785    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:15.143740    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.143775    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:15.147734    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:15.174650    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.174711    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:15.178235    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:15.207870    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.207870    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:15.212288    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:15.248454    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.248454    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:15.253060    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:15.282067    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.282067    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:15.285778    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:15.317032    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.317032    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:15.317032    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:15.317032    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:15.350767    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:15.350767    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:15.408508    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:15.408508    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:15.471124    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:15.471124    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:15.511541    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:15.511541    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:15.597230    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:15.586821   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.588485   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.590856   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.591864   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.593117   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:15.586821   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.588485   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.590856   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.591864   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.593117   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:18.103161    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:18.132020    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:18.167621    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.167621    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:18.171555    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:18.197535    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.197535    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:18.201484    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:18.231207    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.231237    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:18.234569    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:18.262608    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.262608    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:18.266310    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:18.291496    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.291496    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:18.296129    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:18.323567    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.323567    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:18.328112    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:18.363055    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.363055    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:18.368448    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:18.398543    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.398543    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:18.398543    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:18.398543    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:18.451687    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:18.451738    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:18.512324    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:18.512324    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:18.553614    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:18.553614    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:18.644707    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:18.634792   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.635628   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.638131   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.639176   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.640339   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:18.634792   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.635628   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.638131   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.639176   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.640339   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:18.644734    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:18.644779    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:21.175562    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:21.201442    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:21.233480    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.233480    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:21.237891    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:21.267032    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.267032    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:21.273539    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:21.301291    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.301291    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:21.304622    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:21.333953    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.333953    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:21.336973    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:21.366442    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.366442    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:21.370770    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:21.401250    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.401326    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:21.406507    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:21.434989    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.434989    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:21.438536    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:21.468847    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.468895    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:21.468895    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:21.468937    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:21.506543    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:21.506543    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:21.592900    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:21.582025   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.584472   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.585983   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.587799   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.588955   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:21.582025   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.584472   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.585983   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.587799   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.588955   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:21.592928    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:21.592980    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:21.624073    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:21.624114    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:21.675642    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:21.675642    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:24.243223    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:24.272878    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:24.306285    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.306285    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:24.310609    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:24.340982    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.340982    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:24.344434    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:24.371790    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.371790    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:24.376448    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:24.403045    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.403045    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:24.406643    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:24.436352    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.436352    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:24.440299    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:24.472033    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.472033    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:24.476007    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:24.508554    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.508554    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:24.512161    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:24.542727    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.542727    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:24.542727    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:24.542727    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:24.570829    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:24.570829    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:24.618660    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:24.618660    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:24.682106    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:24.682106    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:24.721952    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:24.721952    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:24.799468    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:24.791295   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.792228   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.793454   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.794593   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.795784   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:24.791295   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.792228   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.793454   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.794593   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.795784   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:27.305001    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:27.330707    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:27.365828    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.365828    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:27.370558    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:27.396820    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.396820    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:27.401269    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:27.430536    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.430536    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:27.434026    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:27.462920    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.462920    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:27.466302    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:27.494753    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.494753    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:27.498776    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:27.526827    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.526827    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:27.530938    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:27.558811    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.558811    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:27.562896    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:27.593235    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.593235    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:27.593235    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:27.593235    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:27.645061    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:27.645061    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:27.708198    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:27.708198    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:27.746161    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:27.746161    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:27.834200    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:27.822744   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.823517   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.828076   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.828851   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.830876   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:27.822744   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.823517   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.828076   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.828851   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.830876   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:27.834200    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:27.834200    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:30.365194    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:30.390907    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:30.422859    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.422859    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:30.426658    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:30.458081    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.458081    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:30.462130    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:30.492792    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.492838    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:30.496517    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:30.535575    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.535575    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:30.539664    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:30.570934    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.570934    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:30.575357    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:30.606013    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.606013    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:30.610553    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:30.637448    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.637448    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:30.640965    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:30.670791    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.670866    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:30.670866    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:30.670866    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:30.701120    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:30.701120    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:30.751223    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:30.751223    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:30.813495    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:30.813495    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:30.853428    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:30.853428    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:30.937812    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:30.926651   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.927983   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.930891   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.931995   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.933300   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:30.926651   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.927983   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.930891   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.931995   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.933300   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:33.442840    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:33.471704    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:33.504567    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.504567    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:33.508564    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:33.540112    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.540147    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:33.544036    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:33.572905    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.572905    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:33.576956    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:33.606272    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.606334    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:33.610145    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:33.637137    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.637137    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:33.641246    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:33.670136    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.670136    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:33.673715    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:33.701659    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.701659    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:33.705326    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:33.736499    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.736585    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:33.736585    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:33.736585    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:33.802820    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:33.802820    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:33.841898    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:33.841898    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:33.928502    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:33.917072   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.918297   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.919525   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.922045   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.923688   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:33.917072   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.918297   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.919525   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.922045   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.923688   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:33.928502    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:33.928502    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:33.954803    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:33.954803    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:36.508990    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:36.532529    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:36.565107    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.565107    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:36.569219    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:36.599219    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.599219    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:36.604130    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:36.641323    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.641399    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:36.644874    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:36.678077    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.678077    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:36.681676    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:36.717361    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.717361    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:36.720484    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:36.758068    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.758131    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:36.761928    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:36.788886    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.788886    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:36.792763    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:36.822518    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.822518    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:36.822518    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:36.822594    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:36.886902    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:36.886902    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:36.926353    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:36.926353    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:37.017351    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:37.005572   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.006475   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.009602   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.011562   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.013067   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:37.005572   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.006475   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.009602   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.011562   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.013067   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:37.017351    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:37.017351    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:37.043945    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:37.043945    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:39.613292    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:39.638402    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:39.668963    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.668963    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:39.674050    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:39.706941    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.706993    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:39.711641    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:39.743407    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.743407    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:39.748540    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:39.776567    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.776567    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:39.780756    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:39.809769    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.809769    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:39.814028    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:39.841619    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.841619    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:39.845432    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:39.872294    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.872294    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:39.876039    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:39.906559    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.906559    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:39.906559    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:39.906559    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:39.971123    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:39.971123    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:40.010767    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:40.010767    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:40.121979    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:40.111150   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.112155   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.113799   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.114617   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.117879   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:40.111150   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.112155   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.113799   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.114617   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.117879   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:40.121979    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:40.121979    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:40.153150    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:40.153150    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:42.714553    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:42.739259    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:42.773825    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.773825    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:42.777653    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:42.806593    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.806617    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:42.811305    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:42.839804    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.839804    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:42.843545    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:42.871645    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.871645    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:42.877455    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:42.907575    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.907674    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:42.911474    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:42.947872    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.947872    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:42.951182    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:42.981899    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.981899    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:42.985358    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:43.015278    4248 logs.go:282] 0 containers: []
	W1212 21:37:43.015278    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:43.015278    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:43.015278    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:43.083520    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:43.083520    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:43.124100    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:43.124100    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:43.208232    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:43.199122   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.200440   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.201500   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.203019   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.204672   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:43.199122   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.200440   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.201500   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.203019   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.204672   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:43.208232    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:43.208232    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:43.234266    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:43.234266    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:45.791967    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:45.818451    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:45.851045    4248 logs.go:282] 0 containers: []
	W1212 21:37:45.851045    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:45.854848    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:45.880205    4248 logs.go:282] 0 containers: []
	W1212 21:37:45.880205    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:45.883681    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:45.910629    4248 logs.go:282] 0 containers: []
	W1212 21:37:45.910629    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:45.914618    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:45.944467    4248 logs.go:282] 0 containers: []
	W1212 21:37:45.944467    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:45.948393    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:45.979772    4248 logs.go:282] 0 containers: []
	W1212 21:37:45.979772    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:45.983154    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:46.011861    4248 logs.go:282] 0 containers: []
	W1212 21:37:46.011947    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:46.016147    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:46.043151    4248 logs.go:282] 0 containers: []
	W1212 21:37:46.043151    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:46.048940    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:46.101712    4248 logs.go:282] 0 containers: []
	W1212 21:37:46.101712    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:46.101712    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:46.101712    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:46.165060    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:46.165060    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:46.204152    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:46.204152    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:46.295737    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:46.284405   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.285323   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.289255   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.290702   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.291840   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:46.284405   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.285323   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.289255   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.290702   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.291840   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:46.295737    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:46.295737    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:46.323140    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:46.323657    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:48.876615    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:48.902293    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:48.935424    4248 logs.go:282] 0 containers: []
	W1212 21:37:48.935424    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:48.939391    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:48.966927    4248 logs.go:282] 0 containers: []
	W1212 21:37:48.966927    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:48.970734    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:49.001644    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.001644    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:49.005407    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:49.035360    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.035360    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:49.042740    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:49.074356    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.074356    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:49.078793    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:49.110567    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.110625    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:49.114551    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:49.145236    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.145236    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:49.149599    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:49.177230    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.177230    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:49.177230    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:49.177230    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:49.240142    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:49.240142    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:49.278723    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:49.278723    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:49.367647    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:49.358947   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.359901   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.361621   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.363095   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.364121   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:49.358947   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.359901   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.361621   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.363095   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.364121   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:49.367647    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:49.367647    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:49.397635    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:49.397635    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:51.962408    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:51.992442    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:52.024460    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.024460    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:52.028629    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:52.060221    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.060221    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:52.064265    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:52.104649    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.104649    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:52.109138    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:52.140487    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.140545    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:52.144120    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:52.172932    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.172932    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:52.176618    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:52.206650    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.206650    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:52.210399    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:52.236993    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.236993    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:52.240861    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:52.270655    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.270655    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:52.270655    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:52.270655    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:52.335104    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:52.335104    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:52.370957    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:52.371840    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:52.457985    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:52.448019   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.449089   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.450136   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.451710   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.452676   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:52.448019   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.449089   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.450136   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.451710   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.452676   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:52.457985    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:52.457985    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:52.486332    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:52.486332    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:55.041298    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:55.065637    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:55.094280    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.094280    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:55.097903    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:55.126902    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.126902    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:55.130716    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:55.159228    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.159228    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:55.163220    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:55.192251    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.192251    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:55.195844    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:55.221302    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.221342    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:55.224818    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:55.251600    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.251600    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:55.258126    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:55.288004    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.288004    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:55.292538    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:55.321503    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.321503    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:55.321503    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:55.321503    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:55.382091    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:55.382091    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:55.417183    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:55.417183    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:55.505809    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:55.497729   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.498853   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.500068   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.501049   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.502394   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:55.497729   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.498853   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.500068   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.501049   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.502394   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:55.505857    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:55.505922    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:55.533563    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:55.533563    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:58.084879    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:58.108938    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:58.141011    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.141011    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:58.144507    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:58.173301    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.173301    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:58.177012    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:58.205946    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.205946    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:58.209603    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:58.239537    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.239626    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:58.243771    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:58.274180    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.274180    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:58.278119    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:58.306549    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.306589    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:58.310707    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:58.341993    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.341993    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:58.345805    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:58.374110    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.374110    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:58.374110    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:58.374110    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:58.438540    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:58.438540    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:58.479144    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:58.479144    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:58.563382    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:58.555856   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.556864   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.558351   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.559659   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.561038   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:58.555856   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.556864   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.558351   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.559659   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.561038   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:58.563382    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:58.563382    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:58.590030    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:58.591001    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:01.143523    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:01.166879    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:01.204311    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.204311    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:01.208667    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:01.236959    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.236959    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:01.241497    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:01.268362    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.268362    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:01.272390    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:01.301769    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.301769    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:01.306386    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:01.334250    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.334250    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:01.338080    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:01.367719    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.367719    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:01.371554    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:01.400912    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.400912    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:01.405087    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:01.433025    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.433079    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:01.433112    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:01.433140    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:01.498716    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:01.498716    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:01.537789    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:01.537789    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:01.621520    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:01.609272   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.610819   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.612400   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.614811   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.616071   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:01.609272   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.610819   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.612400   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.614811   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.616071   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:01.621520    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:01.621520    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:01.651241    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:01.651241    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:04.202726    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:04.233568    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:04.264266    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.264266    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:04.268731    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:04.299179    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.299179    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:04.304521    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:04.333532    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.333532    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:04.337480    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:04.370718    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.370774    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:04.374487    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:04.404113    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.404113    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:04.407484    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:04.439641    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.439641    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:04.442993    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:04.473704    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.473745    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:04.478029    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:04.506810    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.506810    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:04.506810    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:04.506810    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:04.536546    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:04.536546    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:04.595827    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:04.595827    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:04.655750    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:04.655750    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:04.693978    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:04.693978    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:04.780038    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:04.769629   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.770581   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.771942   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.773186   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.774761   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:04.769629   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.770581   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.771942   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.773186   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.774761   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:07.285343    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:07.309791    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:07.342594    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.342658    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:07.346771    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:07.375078    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.375078    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:07.378622    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:07.406406    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.406406    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:07.409700    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:07.439671    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.439702    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:07.443226    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:07.474113    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.474113    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:07.478278    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:07.506266    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.506266    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:07.511246    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:07.539784    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.539813    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:07.543598    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:07.571190    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.571190    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:07.571190    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:07.571190    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:07.621969    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:07.621969    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:07.686280    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:07.686280    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:07.729355    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:07.729355    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:07.818055    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:07.806835   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.807966   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.809280   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.811568   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.812813   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:07.806835   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.807966   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.809280   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.811568   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.812813   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:07.818055    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:07.818055    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:10.353048    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:10.380806    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:10.411111    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.411111    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:10.417906    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:10.445879    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.445879    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:10.449270    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:10.478782    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.478782    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:10.482418    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:10.514768    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.514768    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:10.518402    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:10.549807    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.549841    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:10.553625    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:10.584420    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.584420    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:10.590061    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:10.617570    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.617570    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:10.621915    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:10.650697    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.650697    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:10.650697    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:10.650697    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:10.688035    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:10.688035    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:10.779967    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:10.768220   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.770447   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.773427   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.775250   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.776328   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:10.768220   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.770447   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.773427   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.775250   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.776328   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:10.779967    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:10.779967    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:10.808999    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:10.808999    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:10.857901    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:10.857901    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:13.426838    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:13.455711    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:13.487399    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.487399    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:13.491220    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:13.521694    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.521694    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:13.525468    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:13.554648    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.554648    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:13.559306    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:13.587335    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.587335    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:13.591025    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:13.619654    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.619654    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:13.623563    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:13.653939    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.653939    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:13.657955    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:13.687366    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.687396    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:13.690775    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:13.722113    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.722193    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:13.722231    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:13.722231    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:13.810317    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:13.799024   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.800050   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.801497   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.802454   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.803945   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:38:13.810317    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:13.810317    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:13.838155    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:13.838155    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:13.883053    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:13.883053    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:13.946291    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:13.946291    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:16.490914    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:16.517055    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:16.546289    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.546289    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:16.549648    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:16.579266    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.579266    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:16.583479    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:16.622750    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.622824    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:16.625968    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:16.653518    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.653558    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:16.657430    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:16.684716    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.684716    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:16.688471    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:16.715508    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.715508    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:16.720093    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:16.747105    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.747105    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:16.751009    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:16.778855    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.778889    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:16.778935    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:16.778935    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:16.866923    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:16.857374   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.858226   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.860682   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.861845   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.863178   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:38:16.866923    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:16.866923    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:16.893634    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:16.893634    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:16.947106    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:16.947106    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:17.009695    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:17.009695    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:19.555421    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:19.585126    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:19.618491    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.618491    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:19.621943    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:19.649934    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.649934    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:19.654446    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:19.682441    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.682441    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:19.686687    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:19.713873    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.713873    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:19.718086    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:19.746901    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.746901    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:19.751802    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:19.780998    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.780998    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:19.785656    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:19.814435    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.814435    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:19.818376    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:19.842539    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.842539    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:19.842539    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:19.842539    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:19.931943    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:19.922035   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.923477   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.924372   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.926985   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.927824   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:38:19.931943    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:19.931943    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:19.962377    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:19.962377    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:20.016397    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:20.016397    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:20.080069    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:20.080069    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:22.623830    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:22.648339    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:22.676455    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.676455    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:22.680434    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:22.707663    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.707663    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:22.711156    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:22.740689    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.740689    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:22.747514    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:22.774589    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.774589    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:22.778733    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:22.809957    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.810016    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:22.814216    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:22.843548    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.843548    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:22.848917    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:22.881212    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.881212    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:22.885127    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:22.912249    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.912249    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:22.912249    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:22.912249    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:22.971764    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:22.971764    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:23.012466    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:23.012466    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:23.098040    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:23.088804   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.090233   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.092125   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.092902   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.095639   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:38:23.098040    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:23.098040    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:23.125246    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:23.125299    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:25.680678    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:25.710865    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:25.744205    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.744205    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:25.748694    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:25.775965    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.775965    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:25.780266    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:25.809226    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.809226    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:25.813428    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:25.843074    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.843074    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:25.847624    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:25.875245    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.875307    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:25.878757    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:25.909526    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.909526    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:25.913226    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:25.940382    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.940382    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:25.945238    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:25.971090    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.971123    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:25.971123    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:25.971123    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:26.056782    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:26.046652   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.047515   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.050195   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.051210   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.054000   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:38:26.056824    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:26.056824    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:26.088188    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:26.088188    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:26.134947    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:26.134990    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:26.195007    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:26.195007    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:28.743432    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:28.770616    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:28.803520    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.803520    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:28.810180    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:28.835854    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.835854    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:28.839216    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:28.867332    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.867332    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:28.871770    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:28.898967    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.899021    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:28.902579    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:28.930727    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.930781    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:28.934892    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:28.965429    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.965484    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:28.968912    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:28.994989    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.995086    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:28.998524    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:29.029494    4248 logs.go:282] 0 containers: []
	W1212 21:38:29.029494    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:29.029494    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:29.029494    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:29.084546    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:29.084546    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:29.146031    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:29.146031    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:29.185235    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:29.185235    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:29.276958    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:29.265529   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.266811   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.267596   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.272121   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.273001   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:38:29.277002    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:29.277048    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:31.813255    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:31.837157    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:31.867469    4248 logs.go:282] 0 containers: []
	W1212 21:38:31.867532    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:31.871061    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:31.899568    4248 logs.go:282] 0 containers: []
	W1212 21:38:31.899568    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:31.903533    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:31.932812    4248 logs.go:282] 0 containers: []
	W1212 21:38:31.932812    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:31.937348    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:31.968624    4248 logs.go:282] 0 containers: []
	W1212 21:38:31.968624    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:31.972596    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:31.999542    4248 logs.go:282] 0 containers: []
	W1212 21:38:31.999542    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:32.004209    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:32.034665    4248 logs.go:282] 0 containers: []
	W1212 21:38:32.034665    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:32.038848    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:32.068480    4248 logs.go:282] 0 containers: []
	W1212 21:38:32.068480    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:32.073156    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:32.104268    4248 logs.go:282] 0 containers: []
	W1212 21:38:32.104268    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:32.104268    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:32.104268    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:32.168878    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:32.168878    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:32.209739    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:32.209739    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:32.299388    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:32.287377   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.288363   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.290983   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.292213   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.293196   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:32.287377   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.288363   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.290983   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.292213   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.293196   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:32.299388    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:32.299388    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:32.326590    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:32.327171    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:34.882209    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:34.906646    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:34.937770    4248 logs.go:282] 0 containers: []
	W1212 21:38:34.937770    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:34.941176    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:34.970749    4248 logs.go:282] 0 containers: []
	W1212 21:38:34.970749    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:34.974824    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:35.003731    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.003731    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:35.011153    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:35.043865    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.043865    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:35.047948    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:35.079197    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.079197    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:35.084870    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:35.111591    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.111645    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:35.115847    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:35.144310    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.144310    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:35.148221    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:35.176803    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.176833    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:35.176833    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:35.176833    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:35.236846    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:35.236846    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:35.284685    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:35.284685    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:35.374702    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:35.363981   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.364969   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.367167   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.368565   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.369594   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:35.363981   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.364969   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.367167   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.368565   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.369594   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:35.374702    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:35.374702    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:35.402523    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:35.402584    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:37.960369    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:37.991489    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:38.021000    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.021059    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:38.024791    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:38.056577    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.056577    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:38.061074    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:38.091553    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.091619    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:38.095584    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:38.124245    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.124245    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:38.127814    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:38.156149    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.156149    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:38.159694    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:38.191453    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.191475    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:38.195307    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:38.226021    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.226046    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:38.229445    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:38.258701    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.258701    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:38.258701    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:38.258701    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:38.324178    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:38.324178    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:38.363665    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:38.363665    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:38.454082    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:38.443711   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.444603   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.447135   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.448189   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.449070   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:38.443711   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.444603   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.447135   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.448189   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.449070   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:38.454082    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:38.454082    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:38.481686    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:38.481686    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:41.036796    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:41.064580    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:41.096576    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.096636    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:41.100082    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:41.131382    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.131439    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:41.135017    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:41.164298    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.164360    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:41.167964    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:41.198065    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.198065    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:41.202878    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:41.230510    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.230510    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:41.234299    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:41.263767    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.263767    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:41.267078    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:41.296096    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.296096    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:41.299444    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:41.332967    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.332967    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:41.332967    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:41.332967    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:41.380925    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:41.380925    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:41.445577    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:41.445577    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:41.484612    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:41.484612    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:41.569457    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:41.558338   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.559501   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.561053   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.561965   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.565400   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:41.558338   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.559501   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.561053   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.561965   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.565400   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:41.569457    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:41.569457    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:44.125865    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:44.149891    4248 out.go:203] 
	W1212 21:38:44.151830    4248 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1212 21:38:44.151830    4248 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1212 21:38:44.152349    4248 out.go:285] * Related issues:
	W1212 21:38:44.152403    4248 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1212 21:38:44.152403    4248 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1212 21:38:44.154560    4248 out.go:203] 
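The repeated "connection refused" errors above are kubectl's API discovery probe against the apiserver endpoint from the kubeconfig. A minimal sketch of the same probe by hand, assuming the localhost:8443 endpoint shown in the log; while no kube-apiserver container is running it should fail the same way:

```shell
# Hit the apiserver discovery endpoint the way kubectl does
# (-k skips TLS verification, matching an insecure local probe).
# While the apiserver is down, curl fails and the fallback line prints.
curl -sk --max-time 5 "https://localhost:8443/api?timeout=32s" \
  || echo "apiserver unreachable on localhost:8443"
```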
	
	
	==> Docker <==
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.460204251Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.460292361Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.460303163Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.460308363Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.460314664Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.460334266Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.460365970Z" level=info msg="Initializing buildkit"
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.559170137Z" level=info msg="Completed buildkit initialization"
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.564331352Z" level=info msg="Daemon has completed initialization"
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.564491671Z" level=info msg="API listen on /run/docker.sock"
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.564517274Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 21:32:39 newest-cni-449900 dockerd[924]: time="2025-12-12T21:32:39.564565579Z" level=info msg="API listen on [::]:2376"
	Dec 12 21:32:39 newest-cni-449900 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 12 21:32:40 newest-cni-449900 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 21:32:40 newest-cni-449900 cri-dockerd[1218]: time="2025-12-12T21:32:40Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 12 21:32:40 newest-cni-449900 cri-dockerd[1218]: time="2025-12-12T21:32:40Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 12 21:32:40 newest-cni-449900 cri-dockerd[1218]: time="2025-12-12T21:32:40Z" level=info msg="Start docker client with request timeout 0s"
	Dec 12 21:32:40 newest-cni-449900 cri-dockerd[1218]: time="2025-12-12T21:32:40Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 12 21:32:40 newest-cni-449900 cri-dockerd[1218]: time="2025-12-12T21:32:40Z" level=info msg="Loaded network plugin cni"
	Dec 12 21:32:40 newest-cni-449900 cri-dockerd[1218]: time="2025-12-12T21:32:40Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 12 21:32:40 newest-cni-449900 cri-dockerd[1218]: time="2025-12-12T21:32:40Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 12 21:32:40 newest-cni-449900 cri-dockerd[1218]: time="2025-12-12T21:32:40Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 12 21:32:40 newest-cni-449900 cri-dockerd[1218]: time="2025-12-12T21:32:40Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 12 21:32:40 newest-cni-449900 cri-dockerd[1218]: time="2025-12-12T21:32:40Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 12 21:32:40 newest-cni-449900 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:39:02.234536   20033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:39:02.235546   20033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:39:02.236929   20033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:39:02.238427   20033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:39:02.240669   20033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +5.817259] CPU: 7 PID: 461935 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f4cc709eb20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f4cc709eaf6.
	[  +0.000001] RSP: 002b:00007ffc97ee3b30 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.851832] CPU: 4 PID: 462074 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f8ee5e9fb20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f8ee5e9faf6.
	[  +0.000001] RSP: 002b:00007ffc84e853d0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 21:39:02 up  2:40,  0 user,  load average: 1.45, 1.18, 2.26
	Linux newest-cni-449900 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 21:38:59 newest-cni-449900 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:38:59 newest-cni-449900 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
	Dec 12 21:38:59 newest-cni-449900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:38:59 newest-cni-449900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:39:00 newest-cni-449900 kubelet[19876]: E1212 21:39:00.077791   19876 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:39:00 newest-cni-449900 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:39:00 newest-cni-449900 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:39:00 newest-cni-449900 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
	Dec 12 21:39:00 newest-cni-449900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:39:00 newest-cni-449900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:39:00 newest-cni-449900 kubelet[19908]: E1212 21:39:00.808875   19908 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:39:00 newest-cni-449900 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:39:00 newest-cni-449900 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:39:01 newest-cni-449900 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
	Dec 12 21:39:01 newest-cni-449900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:39:01 newest-cni-449900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:39:01 newest-cni-449900 kubelet[19919]: E1212 21:39:01.587734   19919 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:39:01 newest-cni-449900 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:39:01 newest-cni-449900 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:39:02 newest-cni-449900 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
	Dec 12 21:39:02 newest-cni-449900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:39:02 newest-cni-449900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:39:02 newest-cni-449900 kubelet[20042]: E1212 21:39:02.321774   20042 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:39:02 newest-cni-449900 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:39:02 newest-cni-449900 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
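The kubelet crash-loop captured above is a configuration validation failure: this kubelet build refuses to start on a host running the legacy cgroup v1 hierarchy. A quick diagnostic sketch (not part of the captured output; run inside the minikube node, e.g. via `minikube ssh`) to confirm which cgroup hierarchy a host exposes:

```shell
# Print the filesystem type mounted at /sys/fs/cgroup:
#   "cgroup2fs" indicates cgroup v2 (the unified hierarchy)
#   "tmpfs" typically indicates the legacy cgroup v1 layout
#     that this kubelet rejects
stat -fc %T /sys/fs/cgroup/
```

On WSL2 hosts like the one in this run, cgroup v2 can usually be enabled by setting `kernelCommandLine = cgroup_no_v1=all` in `.wslconfig` and restarting WSL.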
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-449900 -n newest-cni-449900
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-449900 -n newest-cni-449900: exit status 2 (592.872ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-449900" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (13.62s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (223.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1212 21:45:38.464257   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1212 21:47:31.943685   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:47:34.513621   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-124600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1212 21:47:44.616615   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1212 21:47:47.886694   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1212 21:48:00.177426   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:48:01.194515   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1212 21:48:05.641196   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1212 21:48:21.141861   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1212 21:48:55.032320   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62842/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-285600 -n no-preload-285600
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-285600 -n no-preload-285600: exit status 2 (716.8551ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "no-preload-285600" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-285600 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-285600 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (0s)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-285600 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-285600
helpers_test.go:244: (dbg) docker inspect no-preload-285600:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941",
	        "Created": "2025-12-12T21:19:18.160660304Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 447961,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T21:30:13.371959374Z",
	            "FinishedAt": "2025-12-12T21:30:09.786882361Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941/hostname",
	        "HostsPath": "/var/lib/docker/containers/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941/hosts",
	        "LogPath": "/var/lib/docker/containers/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941/87204d70769439b8818709d2f126a3a63b54c8db7514b1f84d9f03f52b0ff941-json.log",
	        "Name": "/no-preload-285600",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-285600:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-285600",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7f57614ba244d0d3b05389aca0ac3118a52910d8ae213a8eb43d4329923d883e-init/diff:/var/lib/docker/overlay2/98a48e148b465d6f3cfef773b7defe35ccd4e6e0eb3c49d8b329f27b5b52fa09/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7f57614ba244d0d3b05389aca0ac3118a52910d8ae213a8eb43d4329923d883e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7f57614ba244d0d3b05389aca0ac3118a52910d8ae213a8eb43d4329923d883e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7f57614ba244d0d3b05389aca0ac3118a52910d8ae213a8eb43d4329923d883e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-285600",
	                "Source": "/var/lib/docker/volumes/no-preload-285600/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-285600",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-285600",
	                "name.minikube.sigs.k8s.io": "no-preload-285600",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "91bcdd83bbb23ae9c67dcec01b8d4c16af48c7f986914ad0290fdd4a6c1ce136",
	            "SandboxKey": "/var/run/docker/netns/91bcdd83bbb2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62838"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62839"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62840"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62841"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62842"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-285600": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.121.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:79:02",
	                    "DriverOpts": null,
	                    "NetworkID": "eade8f7b7c484afc6ac9fb22c89a1319287a418e545549931d13e1b2247abede",
	                    "EndpointID": "a19528b5ba1e129df46a773b4e6c518e041141c1355dc620986fcd6472d55808",
	                    "Gateway": "192.168.121.1",
	                    "IPAddress": "192.168.121.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-285600",
	                        "87204d707694"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
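The `NetworkSettings.Ports` block above shows why the earlier pod-list calls targeted `https://127.0.0.1:62842`: Docker mapped the apiserver's `8443/tcp` to host port 62842. A sketch of extracting that mapping directly with `docker inspect`'s Go-template formatter (assumes a running Docker daemon and that the `no-preload-285600` container exists):

```shell
# Print the host port bound to the container's 8443/tcp (the apiserver port).
# The template indexes the Ports map, then takes the first binding's HostPort.
docker inspect \
  -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' \
  no-preload-285600
```

The repeated `EOF` errors against that port, combined with the container `State` being `running` while the apiserver status is `Stopped`, point to the control plane inside the container being down rather than a Docker port-mapping problem.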
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-285600 -n no-preload-285600
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-285600 -n no-preload-285600: exit status 2 (586.8923ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-285600 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-285600 logs -n 25: (2.5066303s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                            ARGS                                                                                                            │           PROFILE            │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ embed-certs-729900 image list --format=json                                                                                                                                                                                │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ pause   │ -p embed-certs-729900 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ unpause │ -p embed-certs-729900 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-124600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                    │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:22 UTC │
	│ start   │ -p default-k8s-diff-port-124600 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:23 UTC │
	│ delete  │ -p embed-certs-729900                                                                                                                                                                                                      │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:22 UTC │ 12 Dec 25 21:23 UTC │
	│ delete  │ -p embed-certs-729900                                                                                                                                                                                                      │ embed-certs-729900           │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ image   │ default-k8s-diff-port-124600 image list --format=json                                                                                                                                                                      │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ pause   │ -p default-k8s-diff-port-124600 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:23 UTC │ 12 Dec 25 21:23 UTC │
	│ unpause │ -p default-k8s-diff-port-124600 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ delete  │ -p default-k8s-diff-port-124600                                                                                                                                                                                            │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ delete  │ -p default-k8s-diff-port-124600                                                                                                                                                                                            │ default-k8s-diff-port-124600 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:24 UTC │ 12 Dec 25 21:24 UTC │
	│ addons  │ enable metrics-server -p no-preload-285600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                    │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:28 UTC │                     │
	│ stop    │ -p no-preload-285600 --alsologtostderr -v=3                                                                                                                                                                                │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:30 UTC │ 12 Dec 25 21:30 UTC │
	│ addons  │ enable dashboard -p no-preload-285600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                               │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:30 UTC │ 12 Dec 25 21:30 UTC │
	│ start   │ -p no-preload-285600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-285600            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:30 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-449900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                    │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:30 UTC │                     │
	│ stop    │ -p newest-cni-449900 --alsologtostderr -v=3                                                                                                                                                                                │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:32 UTC │ 12 Dec 25 21:32 UTC │
	│ addons  │ enable dashboard -p newest-cni-449900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                               │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:32 UTC │ 12 Dec 25 21:32 UTC │
	│ start   │ -p newest-cni-449900 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:32 UTC │                     │
	│ image   │ newest-cni-449900 image list --format=json                                                                                                                                                                                 │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:38 UTC │ 12 Dec 25 21:38 UTC │
	│ pause   │ -p newest-cni-449900 --alsologtostderr -v=1                                                                                                                                                                                │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:38 UTC │ 12 Dec 25 21:38 UTC │
	│ unpause │ -p newest-cni-449900 --alsologtostderr -v=1                                                                                                                                                                                │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:38 UTC │ 12 Dec 25 21:38 UTC │
	│ delete  │ -p newest-cni-449900                                                                                                                                                                                                       │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:39 UTC │ 12 Dec 25 21:39 UTC │
	│ delete  │ -p newest-cni-449900                                                                                                                                                                                                       │ newest-cni-449900            │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 21:39 UTC │ 12 Dec 25 21:39 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 21:32:30
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 21:32:30.067892    4248 out.go:360] Setting OutFile to fd 948 ...
	I1212 21:32:30.114132    4248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:32:30.114655    4248 out.go:374] Setting ErrFile to fd 1420...
	I1212 21:32:30.114655    4248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:32:30.127088    4248 out.go:368] Setting JSON to false
	I1212 21:32:30.129182    4248 start.go:133] hostinfo: {"hostname":"minikube4","uptime":9288,"bootTime":1765565862,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 21:32:30.129182    4248 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 21:32:30.137179    4248 out.go:179] * [newest-cni-449900] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 21:32:30.141601    4248 notify.go:221] Checking for updates...
	I1212 21:32:30.144580    4248 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:32:30.147458    4248 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:32:30.149957    4248 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 21:32:30.152754    4248 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 21:32:30.158286    4248 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:32:30.162328    4248 config.go:182] Loaded profile config "newest-cni-449900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:32:30.162996    4248 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 21:32:30.280643    4248 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 21:32:30.284638    4248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:32:30.515373    4248 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 21:32:30.495157348 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:32:30.521131    4248 out.go:179] * Using the docker driver based on existing profile
	I1212 21:32:30.522918    4248 start.go:309] selected driver: docker
	I1212 21:32:30.523440    4248 start.go:927] validating driver "docker" against &{Name:newest-cni-449900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:32:30.523530    4248 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:32:30.607623    4248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:32:30.833176    4248 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 21:32:30.815233925 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 21:32:30.833176    4248 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 21:32:30.833176    4248 cni.go:84] Creating CNI manager for ""
	I1212 21:32:30.833176    4248 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:32:30.834178    4248 start.go:353] cluster config:
	{Name:newest-cni-449900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:32:30.837195    4248 out.go:179] * Starting "newest-cni-449900" primary control-plane node in "newest-cni-449900" cluster
	I1212 21:32:30.839178    4248 cache.go:134] Beginning downloading kic base image for docker with docker
	I1212 21:32:30.842177    4248 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:32:30.845192    4248 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:32:30.845192    4248 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 21:32:30.846208    4248 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1212 21:32:30.846208    4248 cache.go:65] Caching tarball of preloaded images
	I1212 21:32:30.846208    4248 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 21:32:30.846208    4248 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1212 21:32:30.846208    4248 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\config.json ...
	I1212 21:32:30.923261    4248 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:32:30.923313    4248 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:32:30.923343    4248 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:32:30.923343    4248 start.go:360] acquireMachinesLock for newest-cni-449900: {Name:mk48eea84400331f4298a4f493f82f3b8d104477 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:32:30.923343    4248 start.go:364] duration metric: took 0s to acquireMachinesLock for "newest-cni-449900"
	I1212 21:32:30.923343    4248 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:32:30.923343    4248 fix.go:54] fixHost starting: 
	I1212 21:32:30.931113    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:30.993597    4248 fix.go:112] recreateIfNeeded on newest-cni-449900: state=Stopped err=<nil>
	W1212 21:32:30.993597    4248 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:32:30.996597    4248 out.go:252] * Restarting existing docker container for "newest-cni-449900" ...
	I1212 21:32:31.000598    4248 cli_runner.go:164] Run: docker start newest-cni-449900
	I1212 21:32:31.538801    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:31.592240    4248 kic.go:430] container "newest-cni-449900" state is running.
	I1212 21:32:31.597242    4248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-449900
	I1212 21:32:31.648243    4248 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\config.json ...
	I1212 21:32:31.650253    4248 machine.go:94] provisionDockerMachine start ...
	I1212 21:32:31.653233    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:31.705233    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:31.706241    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:31.706241    4248 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:32:31.708237    4248 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 21:32:34.889437    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-449900
	
	I1212 21:32:34.889437    4248 ubuntu.go:182] provisioning hostname "newest-cni-449900"
	I1212 21:32:34.893345    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:34.953803    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:34.953803    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:34.953803    4248 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-449900 && echo "newest-cni-449900" | sudo tee /etc/hostname
	W1212 21:32:35.616553   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:32:35.153212    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-449900
	
	I1212 21:32:35.157498    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:35.214117    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:35.214117    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:35.214117    4248 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-449900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-449900/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-449900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:32:35.408770    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:32:35.408770    4248 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1212 21:32:35.408770    4248 ubuntu.go:190] setting up certificates
	I1212 21:32:35.408770    4248 provision.go:84] configureAuth start
	I1212 21:32:35.413085    4248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-449900
	I1212 21:32:35.465735    4248 provision.go:143] copyHostCerts
	I1212 21:32:35.466783    4248 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1212 21:32:35.466783    4248 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1212 21:32:35.466783    4248 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 21:32:35.468113    4248 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1212 21:32:35.468113    4248 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1212 21:32:35.468113    4248 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1212 21:32:35.469268    4248 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1212 21:32:35.469268    4248 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1212 21:32:35.469268    4248 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 21:32:35.469826    4248 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-449900 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-449900]
	I1212 21:32:35.674609    4248 provision.go:177] copyRemoteCerts
	I1212 21:32:35.678598    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:32:35.681582    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:35.739388    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:35.872528    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 21:32:35.899869    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 21:32:35.927844    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 21:32:35.957449    4248 provision.go:87] duration metric: took 548.67ms to configureAuth
	I1212 21:32:35.957449    4248 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:32:35.957975    4248 config.go:182] Loaded profile config "newest-cni-449900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:32:35.961590    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:36.020998    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:36.021502    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:36.021530    4248 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 21:32:36.199792    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1212 21:32:36.199792    4248 ubuntu.go:71] root file system type: overlay
	I1212 21:32:36.199792    4248 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 21:32:36.203186    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:36.259998    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:36.260186    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:36.260186    4248 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 21:32:36.441991    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 21:32:36.446390    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:36.501449    4248 main.go:143] libmachine: Using SSH client type: native
	I1212 21:32:36.501449    4248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6c5a3fd00] 0x7ff6c5a42860 <nil>  [] 0s} 127.0.0.1 63036 <nil> <nil>}
	I1212 21:32:36.501449    4248 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 21:32:36.684877    4248 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:32:36.684877    4248 machine.go:97] duration metric: took 5.0345422s to provisionDockerMachine
	I1212 21:32:36.684877    4248 start.go:293] postStartSetup for "newest-cni-449900" (driver="docker")
	I1212 21:32:36.684877    4248 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:32:36.689141    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:32:36.692539    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:36.754930    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:36.889809    4248 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:32:36.897890    4248 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:32:36.897963    4248 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:32:36.897963    4248 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1212 21:32:36.898205    4248 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1212 21:32:36.898730    4248 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem -> 133962.pem in /etc/ssl/certs
	I1212 21:32:36.903242    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:32:36.916977    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /etc/ssl/certs/133962.pem (1708 bytes)
	I1212 21:32:36.944800    4248 start.go:296] duration metric: took 259.9189ms for postStartSetup
	I1212 21:32:36.949417    4248 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:32:36.952948    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:37.004507    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:37.144023    4248 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:32:37.153150    4248 fix.go:56] duration metric: took 6.229706s for fixHost
	I1212 21:32:37.153150    4248 start.go:83] releasing machines lock for "newest-cni-449900", held for 6.229706s
	I1212 21:32:37.156410    4248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-449900
	I1212 21:32:37.223471    4248 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1212 21:32:37.227811    4248 ssh_runner.go:195] Run: cat /version.json
	I1212 21:32:37.227811    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:37.230777    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:37.282580    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:37.288636    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	W1212 21:32:37.396194    4248 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1212 21:32:37.419380    4248 ssh_runner.go:195] Run: systemctl --version
	I1212 21:32:37.433080    4248 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:32:37.443489    4248 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:32:37.447019    4248 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:32:37.460679    4248 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:32:37.460679    4248 start.go:496] detecting cgroup driver to use...
	I1212 21:32:37.460679    4248 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:32:37.460679    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:32:37.488659    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1212 21:32:37.507342    4248 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1212 21:32:37.507342    4248 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1212 21:32:37.507342    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 21:32:37.523982    4248 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 21:32:37.528069    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 21:32:37.548632    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:32:37.567777    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 21:32:37.588723    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 21:32:37.608062    4248 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:32:37.629201    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 21:32:37.649560    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 21:32:37.667527    4248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 21:32:37.690529    4248 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:32:37.710687    4248 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:32:37.729958    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:37.888501    4248 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 21:32:38.016543    4248 start.go:496] detecting cgroup driver to use...
	I1212 21:32:38.016543    4248 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:32:38.021759    4248 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 21:32:38.045532    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:32:38.071354    4248 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:32:38.150544    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:32:38.173582    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 21:32:38.193077    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:32:38.218979    4248 ssh_runner.go:195] Run: which cri-dockerd
	I1212 21:32:38.231122    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 21:32:38.246140    4248 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1212 21:32:38.271127    4248 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 21:32:38.414548    4248 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 21:32:38.549270    4248 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 21:32:38.549270    4248 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 21:32:38.574682    4248 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1212 21:32:38.596980    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:38.746077    4248 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 21:32:39.571827    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:32:39.594550    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1212 21:32:39.618807    4248 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1212 21:32:39.645739    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:32:39.668299    4248 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 21:32:39.807124    4248 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 21:32:39.946119    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:40.086356    4248 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 21:32:40.115320    4248 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1212 21:32:40.136980    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:40.275781    4248 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1212 21:32:40.384906    4248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1212 21:32:40.403362    4248 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 21:32:40.407665    4248 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 21:32:40.417070    4248 start.go:564] Will wait 60s for crictl version
	I1212 21:32:40.421802    4248 ssh_runner.go:195] Run: which crictl
	I1212 21:32:40.435075    4248 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:32:40.477353    4248 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1212 21:32:40.481633    4248 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:32:40.526598    4248 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 21:32:40.566170    4248 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1212 21:32:40.570307    4248 cli_runner.go:164] Run: docker exec -t newest-cni-449900 dig +short host.docker.internal
	I1212 21:32:40.704237    4248 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1212 21:32:40.709308    4248 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1212 21:32:40.719244    4248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:32:40.739290    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:40.797774    4248 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 21:32:40.799861    4248 kubeadm.go:884] updating cluster {Name:newest-cni-449900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 21:32:40.800240    4248 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1212 21:32:40.804298    4248 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:32:40.837317    4248 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 21:32:40.837317    4248 docker.go:621] Images already preloaded, skipping extraction
	I1212 21:32:40.841250    4248 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 21:32:40.875188    4248 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 21:32:40.875188    4248 cache_images.go:86] Images are preloaded, skipping loading
	I1212 21:32:40.875188    4248 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 docker true true} ...
	I1212 21:32:40.875753    4248 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-449900 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:32:40.879753    4248 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 21:32:40.954718    4248 cni.go:84] Creating CNI manager for ""
	I1212 21:32:40.954718    4248 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 21:32:40.954718    4248 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1212 21:32:40.954718    4248 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-449900 NodeName:newest-cni-449900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:32:40.954718    4248 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-449900"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:32:40.959727    4248 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 21:32:40.972418    4248 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:32:40.977111    4248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:32:40.988704    4248 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1212 21:32:41.011653    4248 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 21:32:41.032100    4248 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1212 21:32:41.059217    4248 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1212 21:32:41.066089    4248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:32:41.085707    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:41.226868    4248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:32:41.248749    4248 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900 for IP: 192.168.85.2
	I1212 21:32:41.248749    4248 certs.go:195] generating shared ca certs ...
	I1212 21:32:41.248749    4248 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:32:41.249389    4248 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1212 21:32:41.249651    4248 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1212 21:32:41.249726    4248 certs.go:257] generating profile certs ...
	I1212 21:32:41.250263    4248 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\client.key
	I1212 21:32:41.250513    4248 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.key.67e5e88d
	I1212 21:32:41.250722    4248 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\proxy-client.key
	I1212 21:32:41.251524    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem (1338 bytes)
	W1212 21:32:41.251741    4248 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396_empty.pem, impossibly tiny 0 bytes
	I1212 21:32:41.251826    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1212 21:32:41.252063    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 21:32:41.252220    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 21:32:41.252398    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1212 21:32:41.252710    4248 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem (1708 bytes)
	I1212 21:32:41.254134    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:32:41.288378    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 21:32:41.319186    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:32:41.346629    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:32:41.379412    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 21:32:41.407785    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 21:32:41.434595    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:32:41.465218    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-449900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:32:41.491104    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13396.pem --> /usr/share/ca-certificates/13396.pem (1338 bytes)
	I1212 21:32:41.526955    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\133962.pem --> /usr/share/ca-certificates/133962.pem (1708 bytes)
	I1212 21:32:41.554086    4248 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:32:41.585032    4248 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:32:41.609643    4248 ssh_runner.go:195] Run: openssl version
	I1212 21:32:41.624413    4248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/133962.pem
	I1212 21:32:41.643262    4248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/133962.pem /etc/ssl/certs/133962.pem
	I1212 21:32:41.661513    4248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133962.pem
	I1212 21:32:41.670034    4248 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:48 /usr/share/ca-certificates/133962.pem
	I1212 21:32:41.674026    4248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133962.pem
	I1212 21:32:41.722721    4248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:32:41.739428    4248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:32:41.758028    4248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:32:41.775766    4248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:32:41.783842    4248 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:31 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:32:41.788493    4248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:32:41.834978    4248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:32:41.852716    4248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13396.pem
	I1212 21:32:41.872502    4248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13396.pem /etc/ssl/certs/13396.pem
	I1212 21:32:41.892268    4248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13396.pem
	I1212 21:32:41.902126    4248 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:48 /usr/share/ca-certificates/13396.pem
	I1212 21:32:41.906908    4248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13396.pem
	I1212 21:32:41.957407    4248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:32:41.975429    4248 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:32:41.988570    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:32:42.036143    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:32:42.085875    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:32:42.134210    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:32:42.182920    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:32:42.231061    4248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 21:32:42.275151    4248 kubeadm.go:401] StartCluster: {Name:newest-cni-449900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-449900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:32:42.279857    4248 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 21:32:42.316959    4248 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:32:42.330116    4248 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 21:32:42.330159    4248 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 21:32:42.334136    4248 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:32:42.349113    4248 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:32:42.353059    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.410308    4248 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-449900" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:32:42.411003    4248 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-449900" cluster setting kubeconfig missing "newest-cni-449900" context setting]
	I1212 21:32:42.411047    4248 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:32:42.433468    4248 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:32:42.450763    4248 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1212 21:32:42.450851    4248 kubeadm.go:602] duration metric: took 120.6907ms to restartPrimaryControlPlane
	I1212 21:32:42.450851    4248 kubeadm.go:403] duration metric: took 175.6977ms to StartCluster
	I1212 21:32:42.450887    4248 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:32:42.451069    4248 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 21:32:42.452318    4248 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:32:42.452708    4248 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 21:32:42.452708    4248 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 21:32:42.453236    4248 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-449900"
	I1212 21:32:42.453345    4248 addons.go:70] Setting dashboard=true in profile "newest-cni-449900"
	I1212 21:32:42.453345    4248 addons.go:239] Setting addon dashboard=true in "newest-cni-449900"
	W1212 21:32:42.453442    4248 addons.go:248] addon dashboard should already be in state true
	I1212 21:32:42.453345    4248 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-449900"
	I1212 21:32:42.453442    4248 config.go:182] Loaded profile config "newest-cni-449900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 21:32:42.453345    4248 addons.go:70] Setting default-storageclass=true in profile "newest-cni-449900"
	I1212 21:32:42.453442    4248 host.go:66] Checking if "newest-cni-449900" exists ...
	I1212 21:32:42.453548    4248 host.go:66] Checking if "newest-cni-449900" exists ...
	I1212 21:32:42.453548    4248 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-449900"
	I1212 21:32:42.456924    4248 out.go:179] * Verifying Kubernetes components...
	I1212 21:32:42.462734    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:42.462734    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:42.462734    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:42.464017    4248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:32:42.521801    4248 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1212 21:32:42.522505    4248 addons.go:239] Setting addon default-storageclass=true in "newest-cni-449900"
	I1212 21:32:42.522505    4248 host.go:66] Checking if "newest-cni-449900" exists ...
	I1212 21:32:42.524049    4248 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:32:42.525755    4248 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1212 21:32:42.527435    4248 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:42.527435    4248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:32:42.529307    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 21:32:42.529307    4248 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 21:32:42.532224    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.534064    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.535884    4248 cli_runner.go:164] Run: docker container inspect newest-cni-449900 --format={{.State.Status}}
	I1212 21:32:42.589450    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:42.589450    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:42.590457    4248 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:32:42.590457    4248 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:32:42.593457    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.639453    4248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:32:42.655360    4248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63036 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-449900\id_rsa Username:docker}
	I1212 21:32:42.731687    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 21:32:42.731721    4248 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 21:32:42.735286    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:42.750859    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 21:32:42.750859    4248 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 21:32:42.774055    4248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-449900
	I1212 21:32:42.777032    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 21:32:42.777032    4248 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 21:32:42.833790    4248 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:32:42.834403    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 21:32:42.834403    4248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 21:32:42.837834    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:32:42.838786    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:42.862404    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 21:32:42.862439    4248 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1212 21:32:42.938528    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:42.938593    4248 retry.go:31] will retry after 285.852869ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:42.946574    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 21:32:42.946574    4248 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 21:32:42.970519    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 21:32:42.970570    4248 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 21:32:43.044060    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 21:32:43.044113    4248 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W1212 21:32:43.058105    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.058105    4248 retry.go:31] will retry after 367.133117ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.065874    4248 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 21:32:43.065934    4248 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 21:32:43.090839    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:43.170845    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.170845    4248 retry.go:31] will retry after 360.542613ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.229247    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:43.312297    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.312297    4248 retry.go:31] will retry after 217.305042ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.338331    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:43.430298    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:43.514114    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.514114    4248 retry.go:31] will retry after 194.385848ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.535363    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:43.536351    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:43.629787    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.629787    4248 retry.go:31] will retry after 687.212662ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:32:43.630799    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.630799    4248 retry.go:31] will retry after 521.23237ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.714316    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:43.819832    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.819832    4248 retry.go:31] will retry after 810.162007ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:43.838681    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:44.158533    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:44.239462    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.239462    4248 retry.go:31] will retry after 722.851273ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.322823    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:44.338752    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:32:44.412482    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.412540    4248 retry.go:31] will retry after 687.458608ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.636450    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:44.715327    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.715327    4248 retry.go:31] will retry after 881.564419ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:44.839869    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:44.968341    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:45.056052    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.056052    4248 retry.go:31] will retry after 1.083270835s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.107611    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:45.652119   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:32:45.188500    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.188500    4248 retry.go:31] will retry after 1.051455266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.338492    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:45.601770    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:45.692383    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.692901    4248 retry.go:31] will retry after 847.636525ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:45.840006    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:46.145354    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:46.237762    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.237762    4248 retry.go:31] will retry after 1.841338358s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.245135    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:46.317745    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.317805    4248 retry.go:31] will retry after 1.659422291s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.338989    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:46.545951    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:46.622899    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.622899    4248 retry.go:31] will retry after 2.146117093s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:46.840053    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:47.339185    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:47.839795    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:47.982731    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:48.067392    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.067392    4248 retry.go:31] will retry after 2.380093596s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.084148    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:48.170522    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.171061    4248 retry.go:31] will retry after 1.169420442s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.339141    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:48.775985    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:32:48.839214    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:32:48.853292    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:48.853292    4248 retry.go:31] will retry after 1.773821104s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:49.339743    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:49.345080    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:49.426473    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:49.426473    4248 retry.go:31] will retry after 2.584062662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:49.838663    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:50.339208    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:50.452188    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:50.553300    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:50.553376    4248 retry.go:31] will retry after 3.748834475s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:50.633099    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:50.715582    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:50.715582    4248 retry.go:31] will retry after 2.533349108s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:50.839720    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:51.339390    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:51.839102    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:52.016916    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:52.095547    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:52.095725    4248 retry.go:31] will retry after 3.902877695s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:52.339080    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:52.839789    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:53.254021    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:32:53.339386    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:53.339438    4248 retry.go:31] will retry after 8.052230376s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:53.341078    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:53.838612    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:54.306967    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:32:54.338622    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:32:54.396253    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:54.396253    4248 retry.go:31] will retry after 4.728755572s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:54.840384    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:32:55.691200   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:32:55.339060    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:55.838985    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:56.003958    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:32:56.086306    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:56.086306    4248 retry.go:31] will retry after 4.27813674s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:56.339722    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:56.838152    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:57.339158    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:57.840925    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:58.339529    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:58.839108    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:59.131643    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:32:59.224557    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:59.224611    4248 retry.go:31] will retry after 6.97751667s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:32:59.340004    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:59.839304    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:00.339076    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:00.368989    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:33:00.453078    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:00.453078    4248 retry.go:31] will retry after 11.55737722s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:00.839680    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:01.337369    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:01.396666    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:33:01.481506    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:01.481506    4248 retry.go:31] will retry after 9.469717232s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:01.840205    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:02.340807    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:02.839775    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:03.338747    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:03.839967    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:04.338805    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:04.839654    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:05.730439   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:05.340177    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:05.839010    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:06.207461    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:33:06.310159    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:06.310159    4248 retry.go:31] will retry after 14.485985358s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:06.338617    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:06.839760    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:07.339574    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:07.839761    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:08.339168    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:08.839559    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:09.340617    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:09.841017    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:10.339655    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:10.840253    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:10.956764    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:33:11.041153    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:11.041153    4248 retry.go:31] will retry after 7.720343102s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:11.339467    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:11.838030    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:12.015476    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:33:12.097506    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:12.098032    4248 retry.go:31] will retry after 13.738254929s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:12.340239    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:12.840739    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:13.338865    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:13.841423    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:14.338850    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:14.839426    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:15.769327   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:15.340112    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:15.839885    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:16.340084    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:16.839133    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:17.340118    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:17.838995    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:18.340239    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:18.768160    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:33:18.839718    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:18.858996    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:18.859079    4248 retry.go:31] will retry after 29.319727103s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:19.339526    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:19.839033    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:20.342579    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:20.799893    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:33:20.838333    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:20.898204    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:20.898204    4248 retry.go:31] will retry after 24.787260988s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:21.340136    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:21.839432    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:22.339934    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:22.838948    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:23.339680    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:23.839713    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:24.339736    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:24.841279    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:25.803331   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:25.340290    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:25.840116    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:25.841834    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:33:25.931872    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:25.931872    4248 retry.go:31] will retry after 19.805631473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:26.340454    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:26.840685    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:27.340143    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:27.839689    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:28.341476    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:28.840343    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:29.340212    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:29.839319    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:30.338669    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:30.838810    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:31.338837    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:31.840375    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:32.339813    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:32.839324    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:33.339926    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:33.839761    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:34.341257    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:34.840212    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:33:35.842883   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:35.339489    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:35.839856    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:36.337998    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:36.840979    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:37.339343    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:37.839384    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:38.340134    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:38.840040    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:39.339613    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:39.841297    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:40.339971    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:40.839521    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:41.340107    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:41.841992    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:42.341019    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:42.840372    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:42.879262    4248 logs.go:282] 0 containers: []
	W1212 21:33:42.879262    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:42.883398    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:42.914084    4248 logs.go:282] 0 containers: []
	W1212 21:33:42.914084    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:42.917664    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:42.949277    4248 logs.go:282] 0 containers: []
	W1212 21:33:42.949344    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:42.952925    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:42.982130    4248 logs.go:282] 0 containers: []
	W1212 21:33:42.982130    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:42.985453    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:43.018015    4248 logs.go:282] 0 containers: []
	W1212 21:33:43.018015    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:43.021515    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:43.051693    4248 logs.go:282] 0 containers: []
	W1212 21:33:43.051693    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:43.055531    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:43.086156    4248 logs.go:282] 0 containers: []
	W1212 21:33:43.086156    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:43.089864    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:43.124751    4248 logs.go:282] 0 containers: []
	W1212 21:33:43.124784    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:43.124813    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:43.124844    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:43.199819    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:43.199905    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:43.266773    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:43.266773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:43.306805    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:43.306805    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:43.398810    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:43.387138    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.388868    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.391036    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.392819    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.394391    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:43.387138    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.388868    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.391036    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.392819    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:43.394391    3347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:43.398906    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:43.398934    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1212 21:33:45.876156   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:45.691215    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:33:45.743028    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:33:45.776796    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:45.776826    4248 retry.go:31] will retry after 41.02577954s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:33:45.824108    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:45.824108    4248 retry.go:31] will retry after 22.15645807s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:45.933456    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:45.956650    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:45.987658    4248 logs.go:282] 0 containers: []
	W1212 21:33:45.987702    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:45.991775    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:46.021440    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.021486    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:46.025175    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:46.052749    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.052749    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:46.057405    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:46.089535    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.089535    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:46.093406    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:46.133904    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.133904    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:46.137545    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:46.167364    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.167364    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:46.172118    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:46.199303    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.199335    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:46.203217    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:46.233482    4248 logs.go:282] 0 containers: []
	W1212 21:33:46.233482    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:46.233482    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:46.233482    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:46.309757    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:46.300789    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.302135    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.303841    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.305512    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.306441    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:46.300789    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.302135    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.303841    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.305512    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:46.306441    3508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:46.309757    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:46.309757    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:46.342247    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:46.342247    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:46.392802    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:46.392802    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:46.456332    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:46.456332    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:48.184906    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 21:33:48.272776    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:48.272776    4248 retry.go:31] will retry after 31.924309129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:33:49.001621    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:49.031284    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:49.062413    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.062413    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:49.067359    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:49.099157    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.099157    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:49.103086    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:49.145777    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.145841    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:49.148934    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:49.177432    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.177432    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:49.180892    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:49.208677    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.208753    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:49.213124    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:49.242190    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.242273    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:49.246212    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:49.271793    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.271793    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:49.275860    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:49.301204    4248 logs.go:282] 0 containers: []
	W1212 21:33:49.301304    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:49.301304    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:49.301304    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:49.387773    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:49.378267    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.379554    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.380906    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.382182    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.383498    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:49.378267    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.379554    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.380906    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.382182    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:49.383498    3680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:49.387773    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:49.387773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:49.417403    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:49.417403    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:49.468610    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:49.468669    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:49.530801    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:49.530801    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:52.075422    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:52.099836    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:52.132317    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.132317    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:52.136505    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:52.168253    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.168329    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:52.172204    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:52.201698    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.201728    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:52.204999    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:52.232400    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.232400    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:52.235747    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:52.264373    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.264373    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:52.268631    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:52.296360    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.296360    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:52.301499    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:52.328506    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.328506    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:52.332618    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:52.364046    4248 logs.go:282] 0 containers: []
	W1212 21:33:52.364046    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:52.364046    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:52.364046    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:52.429893    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:52.429893    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:52.469809    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:52.469809    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:52.561531    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:52.552048    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.553328    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.554454    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.555743    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.556766    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:52.552048    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.553328    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.554454    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.555743    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:52.556766    3841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:52.561531    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:52.561531    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:52.589557    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:52.589610    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:33:55.915985   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:33:55.143558    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:55.169650    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:55.199312    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.199312    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:55.203019    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:55.231107    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.231107    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:55.234903    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:55.262662    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.262662    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:55.267031    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:55.298297    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.298297    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:55.302781    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:55.329536    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.329536    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:55.333805    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:55.361920    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.361920    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:55.366064    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:55.390952    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.390952    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:55.395295    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:55.420565    4248 logs.go:282] 0 containers: []
	W1212 21:33:55.420565    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:55.420565    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:55.420565    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:33:55.485763    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:55.485763    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:55.523584    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:55.524585    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:55.609239    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:55.599132    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.600050    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.601408    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.603109    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.604199    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:55.599132    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.600050    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.601408    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.603109    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:55.604199    4000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:55.609239    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:55.609239    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:55.635439    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:55.635439    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:58.192127    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:33:58.215045    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:33:58.245598    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.245598    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:33:58.249366    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:33:58.277778    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.277778    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:33:58.282003    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:33:58.308406    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.308406    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:33:58.312530    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:33:58.343615    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.343615    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:33:58.347469    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:33:58.374271    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.374323    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:33:58.378940    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:33:58.408013    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.408013    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:33:58.412815    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:33:58.442217    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.442217    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:33:58.446143    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:33:58.473696    4248 logs.go:282] 0 containers: []
	W1212 21:33:58.473696    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:33:58.473696    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:33:58.473696    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:33:58.512536    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:33:58.512536    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:33:58.598847    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:33:58.588486    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.590430    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.592244    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.593816    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.594663    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:33:58.588486    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.590430    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.592244    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.593816    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:33:58.594663    4159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:33:58.598847    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:33:58.598847    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:33:58.631972    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:33:58.631972    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:33:58.686549    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:33:58.686549    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:01.254401    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:01.280957    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:01.311649    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.311649    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:01.317426    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:01.347111    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.347111    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:01.351587    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:01.384140    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.384140    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:01.388189    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:01.414950    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.415035    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:01.419033    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:01.451573    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.451573    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:01.458437    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:01.486676    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.486676    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:01.490132    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:01.519922    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.519945    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:01.523525    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:01.551133    4248 logs.go:282] 0 containers: []
	W1212 21:34:01.551133    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:01.551133    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:01.551226    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:01.643156    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:01.629552    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.630752    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.631976    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.634745    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.636369    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:01.629552    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.630752    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.631976    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.634745    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:01.636369    4309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:01.643201    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:01.643201    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:01.670680    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:01.670680    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:01.719121    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:01.719121    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:01.778945    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:01.778945    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:04.321335    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:04.346182    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:04.379694    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.379694    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:04.383453    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:04.413770    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.413770    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:04.417438    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:04.449689    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.449689    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:04.453945    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:04.484307    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.484336    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:04.488052    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:04.520437    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.520529    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:04.523808    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:04.554411    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.554486    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:04.558132    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:04.590943    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.590991    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:04.596733    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:04.626155    4248 logs.go:282] 0 containers: []
	W1212 21:34:04.626155    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:04.626155    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:04.626155    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:04.690704    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:04.690704    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:04.737151    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:04.737151    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:04.836298    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:04.823870    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.824851    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.826975    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.828203    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.829992    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:04.823870    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.824851    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.826975    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.828203    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:04.829992    4472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:04.836298    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:04.836298    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:04.866456    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:04.866456    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:34:05.955333   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:07.431984    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:07.455983    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:07.487054    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.487054    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:07.491008    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:07.518559    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.518559    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:07.523241    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:07.555141    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.555206    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:07.559053    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:07.586080    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.586129    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:07.590415    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:07.617729    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.617802    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:07.621528    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:07.648847    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.648847    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:07.653016    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:07.679589    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.679589    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:07.682589    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:07.710509    4248 logs.go:282] 0 containers: []
	W1212 21:34:07.710549    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:07.710584    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:07.710584    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:07.740602    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:07.740602    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:07.795596    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:07.795596    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:07.856481    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:07.856481    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:07.896541    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:07.896541    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:34:07.984068    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:34:07.988076    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:07.977659    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.978729    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.979455    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.981723    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.982573    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:07.977659    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.978729    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.979455    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.981723    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:07.982573    4663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1212 21:34:08.063575    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:34:08.063575    4248 retry.go:31] will retry after 24.947157304s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 21:34:10.493129    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:10.518032    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:10.551592    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.551592    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:10.555349    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:10.583478    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.583478    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:10.587337    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:10.614968    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.614968    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:10.618410    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:10.649067    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.649067    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:10.651995    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:10.683408    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.683408    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:10.687055    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:10.719380    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.719380    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:10.722379    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:10.750539    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.750539    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:10.753888    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:10.783559    4248 logs.go:282] 0 containers: []
	W1212 21:34:10.783559    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:10.783559    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:10.783559    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:10.847683    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:10.847683    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:10.886290    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:10.886290    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:10.977825    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:10.965780    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.967047    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.970464    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.971759    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.973064    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:10.965780    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.967047    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.970464    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.971759    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:10.973064    4816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:10.977825    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:10.977825    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:11.007383    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:11.007383    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:13.563252    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:13.588423    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:13.621565    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.621565    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:13.625750    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:13.654777    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.654777    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:13.658374    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:13.688618    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.688672    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:13.692472    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:13.719610    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.719610    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:13.723237    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:13.752648    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.752648    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:13.756634    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:13.783829    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.783897    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:13.788161    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:13.819558    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.819558    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:13.823971    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:13.852579    4248 logs.go:282] 0 containers: []
	W1212 21:34:13.852579    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:13.852650    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:13.852650    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:13.917806    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:13.917806    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:13.957291    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:13.957291    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:14.045151    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:14.033435    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.035385    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.037555    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.038496    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.039928    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:14.033435    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.035385    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.037555    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.038496    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:14.039928    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:14.045151    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:14.045151    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:14.072113    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:14.072113    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:34:15.995826   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:16.634542    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:16.664271    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:16.693836    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.693836    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:16.697626    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:16.724961    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.724961    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:16.729680    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:16.759174    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.759174    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:16.764416    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:16.793992    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.793992    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:16.801287    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:16.833032    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.833032    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:16.837930    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:16.867028    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.867109    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:16.871232    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:16.901023    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.901023    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:16.904729    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:16.932215    4248 logs.go:282] 0 containers: []
	W1212 21:34:16.932215    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:16.932215    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:16.932215    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:17.000205    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:17.000205    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:17.040242    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:17.040242    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:17.128411    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:17.118697    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.119770    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.121001    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.121935    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.122994    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:17.118697    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.119770    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.121001    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.121935    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:17.122994    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:17.128411    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:17.128411    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:17.154693    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:17.154693    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:19.712060    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:19.740196    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:19.769381    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.769381    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:19.774003    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:19.805171    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.805171    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:19.809714    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:19.839073    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.839073    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:19.846215    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:19.874403    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.874403    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:19.878637    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:19.912913    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.912913    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:19.916626    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:19.944658    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.944710    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:19.948241    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:19.979179    4248 logs.go:282] 0 containers: []
	W1212 21:34:19.979241    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:19.982846    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:20.011754    4248 logs.go:282] 0 containers: []
	W1212 21:34:20.011812    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:20.011864    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:20.011913    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:20.052176    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:20.052176    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:20.143325    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:20.132505    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.133469    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.134667    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.135927    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.139001    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:20.132505    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.133469    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.134667    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.135927    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:20.139001    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:20.143363    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:20.143410    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:20.172292    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:20.172292    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:20.202316    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:34:20.222997    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:20.222997    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1212 21:34:20.293327    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:34:20.293327    4248 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 21:34:22.836246    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:22.860619    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:22.894925    4248 logs.go:282] 0 containers: []
	W1212 21:34:22.894925    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:22.898706    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:22.928391    4248 logs.go:282] 0 containers: []
	W1212 21:34:22.928391    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:22.931509    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:22.962526    4248 logs.go:282] 0 containers: []
	W1212 21:34:22.962526    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:22.966341    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:22.998739    4248 logs.go:282] 0 containers: []
	W1212 21:34:22.998810    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:23.002260    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:23.038485    4248 logs.go:282] 0 containers: []
	W1212 21:34:23.038485    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:23.042357    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:23.069862    4248 logs.go:282] 0 containers: []
	W1212 21:34:23.069862    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:23.073843    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:23.102066    4248 logs.go:282] 0 containers: []
	W1212 21:34:23.102066    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:23.105940    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:23.134159    4248 logs.go:282] 0 containers: []
	W1212 21:34:23.134159    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:23.134159    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:23.134159    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:23.200245    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:23.200245    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:23.245899    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:23.245899    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:23.334171    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:23.323723    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.324634    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.327041    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.328182    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.329038    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:23.323723    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.324634    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.327041    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.328182    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:23.329038    5477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:23.334171    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:23.334171    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:23.363271    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:23.363890    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:34:26.035692   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:25.919925    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:25.943736    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:25.976745    4248 logs.go:282] 0 containers: []
	W1212 21:34:25.976745    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:25.982399    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:26.011034    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.011111    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:26.015074    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:26.041930    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.041960    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:26.046384    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:26.079224    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.079224    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:26.083294    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:26.114533    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.114622    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:26.118567    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:26.144766    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.144766    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:26.148635    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:26.178718    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.178773    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:26.182397    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:26.209360    4248 logs.go:282] 0 containers: []
	W1212 21:34:26.209428    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:26.209458    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:26.209458    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:26.246526    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:26.246526    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:26.333000    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:26.322480    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.324197    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.326202    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.327840    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.328729    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:26.322480    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.324197    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.326202    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.327840    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:26.328729    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:26.333074    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:26.333074    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:26.364508    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:26.364508    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:26.415398    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:26.415922    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:26.808632    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 21:34:26.887159    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:34:26.887159    4248 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 21:34:28.982561    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:29.008658    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:29.038321    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.038321    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:29.043365    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:29.074505    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.074505    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:29.079133    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:29.107625    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.107625    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:29.111459    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:29.141746    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.141771    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:29.145351    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:29.173757    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.173757    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:29.177451    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:29.207313    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.207375    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:29.211355    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:29.238896    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.238896    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:29.242592    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:29.271887    4248 logs.go:282] 0 containers: []
	W1212 21:34:29.271955    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:29.271992    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:29.271992    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:29.302135    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:29.302135    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:29.354758    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:29.354787    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:29.415567    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:29.416566    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:29.460567    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:29.460567    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:29.568507    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:29.558828    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.559760    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.561215    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.564376    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.566192    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:29.558828    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.559760    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.561215    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.564376    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:29.566192    5819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:32.072619    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:32.098196    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:32.129542    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.129605    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:32.133207    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:32.165871    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.165871    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:32.169728    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:32.197622    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.197622    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:32.201406    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:32.229774    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.229866    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:32.233559    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:32.261871    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.261922    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:32.265550    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:32.292765    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.292838    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:32.297859    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:32.324309    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.324309    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:32.330216    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:32.361218    4248 logs.go:282] 0 containers: []
	W1212 21:34:32.361300    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:32.361300    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:32.361300    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:32.397467    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:32.397467    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:32.486261    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:32.475376    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.476624    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.477661    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.478789    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.479928    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:32.475376    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.476624    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.477661    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.478789    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:32.479928    5959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:32.486261    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:32.486261    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:32.514032    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:32.514583    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:32.560591    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:32.560639    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:33.016195    4248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 21:34:33.096004    4248 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 21:34:33.096004    4248 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 21:34:33.099548    4248 out.go:179] * Enabled addons: 
	I1212 21:34:33.102664    4248 addons.go:530] duration metric: took 1m50.6481558s for enable addons: enabled=[]
	W1212 21:34:36.071150   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:35.127374    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:35.151249    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:35.181629    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.181629    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:35.185603    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:35.214096    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.214151    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:35.218239    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:35.247180    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.247201    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:35.253408    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:35.284673    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.284673    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:35.288386    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:35.317675    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.317675    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:35.320949    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:35.350108    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.350178    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:35.353675    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:35.385201    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.385201    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:35.388633    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:35.416281    4248 logs.go:282] 0 containers: []
	W1212 21:34:35.416281    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:35.416281    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:35.416281    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:35.444773    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:35.444773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:35.494185    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:35.494185    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:35.554851    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:35.554851    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:35.593466    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:35.593466    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:35.675497    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:35.665255    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.666291    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.667389    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.668747    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.670048    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:35.665255    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.666291    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.667389    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.668747    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:35.670048    6158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:38.181493    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:38.207315    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:38.240970    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.240970    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:38.244930    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:38.270853    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.270853    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:38.274626    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:38.302607    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.302607    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:38.306311    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:38.332974    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.332998    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:38.336938    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:38.366523    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.366523    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:38.370763    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:38.400788    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.400855    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:38.404730    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:38.432105    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.432140    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:38.435718    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:38.468252    4248 logs.go:282] 0 containers: []
	W1212 21:34:38.468252    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:38.468252    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:38.468252    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:38.498010    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:38.498010    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:38.549065    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:38.549065    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:38.610282    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:38.610282    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:38.649865    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:38.649865    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:38.742735    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:38.729670    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.730594    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.734869    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.735575    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.737749    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:38.729670    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.730594    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.734869    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.735575    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:38.737749    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:41.248859    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:41.273451    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:41.307576    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.307576    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:41.311206    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:41.341191    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.341191    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:41.344873    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:41.373089    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.373089    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:41.377064    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:41.407927    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.407927    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:41.411904    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:41.438747    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.438747    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:41.442684    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:41.471705    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.471705    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:41.475643    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:41.502964    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.503009    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:41.506219    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:41.537468    4248 logs.go:282] 0 containers: []
	W1212 21:34:41.537468    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:41.537468    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:41.537468    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:41.601385    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:41.601385    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:41.640441    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:41.640441    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:41.726762    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:41.716055    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.717335    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.718444    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.719692    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.720641    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:41.716055    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.717335    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.718444    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.719692    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:41.720641    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:41.727284    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:41.727418    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:41.753195    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:41.753250    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:44.308085    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:44.334644    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:44.365798    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.365798    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:44.369363    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:44.401410    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.401463    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:44.405291    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:44.434343    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.434343    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:44.438273    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:44.468474    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.468525    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:44.474306    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:44.500642    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.500642    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:44.504057    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:44.533188    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.533188    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:44.538912    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:44.570110    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.570156    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:44.573802    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:44.628237    4248 logs.go:282] 0 containers: []
	W1212 21:34:44.628237    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:44.628313    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:44.628313    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:44.695236    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:44.695236    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:44.756020    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:44.756020    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:44.797607    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:44.797607    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:44.885974    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:44.873979    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.875233    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.876512    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.878696    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.880447    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:44.873979    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.875233    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.876512    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.878696    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:44.880447    6641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:44.885974    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:44.885974    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1212 21:34:46.110883   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:47.418476    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:47.448163    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:47.485273    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.485367    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:47.489088    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:47.519610    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.519610    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:47.523527    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:47.556797    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.556797    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:47.561198    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:47.592455    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.592486    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:47.597545    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:47.642336    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.642336    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:47.646873    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:47.674652    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.674652    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:47.678167    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:47.711489    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.711583    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:47.715137    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:47.743744    4248 logs.go:282] 0 containers: []
	W1212 21:34:47.743744    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:47.743744    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:47.743744    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:47.772500    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:47.772500    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:47.821703    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:47.821703    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:47.885067    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:47.886068    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:47.927691    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:47.927691    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:48.009816    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:47.997942    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:47.998780    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.001728    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.003155    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.003997    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:47.997942    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:47.998780    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.001728    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.003155    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:48.003997    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:50.515712    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:50.539433    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:50.569952    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.570011    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:50.573934    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:50.602042    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.602042    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:50.606815    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:50.637314    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.637314    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:50.641189    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:50.671374    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.671448    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:50.675158    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:50.705121    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.705121    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:50.708839    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:50.736349    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.736349    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:50.740642    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:50.766780    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.766780    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:50.771844    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:50.799830    4248 logs.go:282] 0 containers: []
	W1212 21:34:50.799830    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:50.799830    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:50.799935    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:50.864221    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:50.864221    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:50.902900    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:50.902900    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:50.987201    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:50.977230    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.978460    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.979921    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.981707    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.982796    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:50.977230    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.978460    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.979921    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.981707    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:50.982796    6948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:50.987245    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:50.987307    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:51.013974    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:51.013974    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:53.565470    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:53.591015    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:53.622676    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.622700    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:53.626721    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:53.656673    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.656708    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:53.660173    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:53.690897    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.690897    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:53.695672    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:53.724746    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.724746    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:53.729650    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:53.755860    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.755860    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:53.761786    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:53.788721    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.788721    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:53.792819    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:53.824882    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.824923    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:53.827878    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:53.861835    4248 logs.go:282] 0 containers: []
	W1212 21:34:53.861835    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:53.861835    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:53.861922    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:53.923537    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:53.923537    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:53.963431    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:53.963431    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:54.047319    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:54.037350    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.038501    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.039589    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.040496    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.043315    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:54.037350    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.038501    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.039589    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.040496    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:54.043315    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:54.047386    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:54.047386    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:54.072633    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:54.072633    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:34:56.149055   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:34:56.625180    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:56.650655    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:56.685280    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.685318    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:56.689156    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:56.714493    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.714493    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:56.718695    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:56.746923    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.746990    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:56.750886    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:56.778419    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.778419    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:56.783916    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:56.811946    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.811946    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:56.815910    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:56.846245    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.846245    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:56.849750    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:56.881552    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.881612    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:56.886159    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:56.914494    4248 logs.go:282] 0 containers: []
	W1212 21:34:56.914494    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:56.914494    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:56.914494    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:34:56.978107    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:34:56.978107    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:34:57.017141    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:34:57.017141    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:34:57.105278    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:34:57.095758    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.096802    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.099867    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.100954    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.102366    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:34:57.095758    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.096802    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.099867    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.100954    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:34:57.102366    7267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:34:57.105278    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:34:57.105278    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:34:57.136106    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:34:57.136106    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:34:59.698008    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:34:59.721859    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:34:59.752926    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.752926    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:34:59.758293    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:34:59.787817    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.787817    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:34:59.792012    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:34:59.820724    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.820724    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:34:59.824383    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:34:59.853943    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.853943    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:34:59.856939    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:34:59.884234    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.884234    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:34:59.887359    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:34:59.917769    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.917769    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:34:59.920766    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:34:59.947735    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.947735    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:34:59.950845    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:34:59.982686    4248 logs.go:282] 0 containers: []
	W1212 21:34:59.982686    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:34:59.982686    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:34:59.982686    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:00.047428    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:00.047428    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:00.087722    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:00.087722    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:00.173037    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:00.162970    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.163861    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.166472    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.167086    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.169872    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:00.162970    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.163861    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.166472    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.167086    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:00.169872    7431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:00.173037    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:00.173124    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:00.200722    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:00.200722    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:02.758771    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:02.785740    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:02.818761    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.818761    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:02.822630    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:02.856985    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.857041    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:02.860042    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:02.891635    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.891635    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:02.896827    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:02.927006    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.927006    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:02.930675    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:02.961911    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.961911    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:02.966203    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:02.995618    4248 logs.go:282] 0 containers: []
	W1212 21:35:02.995618    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:03.000489    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:03.029638    4248 logs.go:282] 0 containers: []
	W1212 21:35:03.029720    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:03.033579    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:03.062226    4248 logs.go:282] 0 containers: []
	W1212 21:35:03.062226    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:03.062226    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:03.062226    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:03.123924    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:03.123924    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:03.164267    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:03.164267    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:03.278702    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:03.266561    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.267580    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.268479    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.270653    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.271579    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:03.266561    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.267580    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.268479    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.270653    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:03.271579    7583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:03.278702    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:03.278702    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:03.310678    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:03.310678    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:35:06.187241   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:35:05.870522    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:05.895548    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:05.931745    4248 logs.go:282] 0 containers: []
	W1212 21:35:05.931745    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:05.935905    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:05.967105    4248 logs.go:282] 0 containers: []
	W1212 21:35:05.967167    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:05.970526    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:05.999378    4248 logs.go:282] 0 containers: []
	W1212 21:35:05.999501    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:06.005272    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:06.033559    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.033559    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:06.037432    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:06.067367    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.067423    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:06.071216    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:06.098778    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.098778    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:06.102725    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:06.129330    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.129373    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:06.133426    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:06.161909    4248 logs.go:282] 0 containers: []
	W1212 21:35:06.161982    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:06.161982    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:06.161982    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:06.227303    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:06.227303    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:06.268038    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:06.268038    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:06.361371    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:06.349507    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.350379    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.353273    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.354715    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.355937    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:06.349507    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.350379    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.353273    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.354715    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:06.355937    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:06.361371    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:06.361371    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:06.387773    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:06.387773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:08.952500    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:08.976516    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:09.008567    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.008567    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:09.012505    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:09.041661    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.041661    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:09.045897    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:09.072715    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.072715    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:09.076471    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:09.104975    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.104975    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:09.110239    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:09.137529    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.137529    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:09.142874    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:09.172751    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.172856    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:09.176271    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:09.208124    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.208124    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:09.211966    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:09.240860    4248 logs.go:282] 0 containers: []
	W1212 21:35:09.240922    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:09.240922    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:09.240922    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:09.277501    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:09.277501    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:09.365788    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:09.355192    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.356293    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.357382    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.358419    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.359695    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:09.355192    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.356293    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.357382    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.358419    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:09.359695    7909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:09.365788    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:09.365788    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:09.393564    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:09.393564    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:09.446625    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:09.446625    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:12.014046    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:12.039061    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:12.073012    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.073012    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:12.076517    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:12.106078    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.106078    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:12.110106    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:12.148792    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.148792    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:12.153359    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:12.180485    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.180485    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:12.184489    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:12.216517    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.216517    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:12.219938    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:12.248201    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.248280    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:12.251955    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:12.279303    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.279303    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:12.283688    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:12.311155    4248 logs.go:282] 0 containers: []
	W1212 21:35:12.311155    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:12.311240    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:12.311240    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:12.363330    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:12.363330    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:12.424272    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:12.424272    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:12.465092    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:12.465092    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:12.553464    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:12.543149    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.544047    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.546957    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.548802    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.550266    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:12.543149    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.544047    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.546957    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.548802    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:12.550266    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:12.553464    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:12.553464    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:15.089907    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:15.115463    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	W1212 21:35:16.227474   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:35:15.148783    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.148874    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:15.153957    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:15.184011    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.184098    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:15.189531    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:15.217178    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.217178    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:15.220770    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:15.250751    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.250751    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:15.254712    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:15.285217    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.285217    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:15.289685    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:15.318549    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.318549    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:15.322745    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:15.350449    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.350449    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:15.354843    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:15.387373    4248 logs.go:282] 0 containers: []
	W1212 21:35:15.387373    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:15.387373    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:15.387448    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:15.447923    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:15.447923    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:15.486013    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:15.486013    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:15.578245    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:15.568067    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.569055    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.570286    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.571140    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.573311    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:15.568067    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.569055    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.570286    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.571140    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:15.573311    8239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:15.578325    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:15.578352    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:15.605626    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:15.605626    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:18.166679    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:18.192323    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:18.225358    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.225358    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:18.228980    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:18.257552    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.257552    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:18.261101    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:18.289460    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.289460    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:18.294831    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:18.323718    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.323799    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:18.327263    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:18.355946    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.356051    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:18.360915    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:18.389130    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.389212    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:18.392968    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:18.421891    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.421979    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:18.425694    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:18.454158    4248 logs.go:282] 0 containers: []
	W1212 21:35:18.454158    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:18.454158    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:18.454158    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:18.537920    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:18.527608    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.528385    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.531174    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.532949    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.533980    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:18.527608    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.528385    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.531174    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.532949    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:18.533980    8395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:18.537920    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:18.537920    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:18.567575    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:18.567575    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:18.620910    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:18.620945    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:18.683030    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:18.683030    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:21.227570    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:21.256918    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:21.288783    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.288783    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:21.292853    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:21.323426    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.323426    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:21.326841    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:21.358519    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.358519    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:21.364432    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:21.396744    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.396829    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:21.400322    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:21.428939    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.428939    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:21.432771    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:21.462453    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.462453    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:21.466020    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:21.494057    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.494106    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:21.497434    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:21.528610    4248 logs.go:282] 0 containers: []
	W1212 21:35:21.528649    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:21.528649    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:21.528649    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:21.565220    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:21.565220    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:21.656317    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:21.646689    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.647621    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.649439    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.651349    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.653188    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:21.646689    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.647621    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.649439    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.651349    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:21.653188    8559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:21.656317    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:21.656317    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:21.685835    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:21.685835    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:21.735864    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:21.735864    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:24.301555    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:24.325946    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:24.356277    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.356277    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:24.360286    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:24.388916    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.388916    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:24.392624    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:24.420665    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.420696    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:24.424318    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:24.453296    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.453296    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:24.456761    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:24.485923    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.485923    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:24.489706    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:24.521430    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.521430    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:24.525001    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:24.553182    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.553182    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:24.557651    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:24.585999    4248 logs.go:282] 0 containers: []
	W1212 21:35:24.585999    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:24.585999    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:24.585999    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:24.649025    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:24.649025    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:24.687813    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:24.687813    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:24.771442    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:24.762805    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.764135    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.765433    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.766914    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.768359    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:24.762805    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.764135    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.765433    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.766914    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:24.768359    8726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:24.771442    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:24.771442    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:24.798209    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:24.798236    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:35:26.266522   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:35:27.358394    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:27.382187    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:27.414963    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.414963    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:27.418575    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:27.446874    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.446874    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:27.450946    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:27.478168    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.478206    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:27.481426    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:27.510494    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.510494    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:27.514962    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:27.544571    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.544571    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:27.548425    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:27.577760    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.577760    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:27.583647    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:27.611248    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.611321    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:27.614827    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:27.643657    4248 logs.go:282] 0 containers: []
	W1212 21:35:27.643657    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:27.643657    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:27.643657    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:27.727590    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:27.720083    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.721080    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.722381    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.723519    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.724748    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:27.720083    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.721080    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.722381    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.723519    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:27.724748    8883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:27.727590    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:27.727590    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:27.758480    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:27.758480    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:27.807919    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:27.807919    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:27.868159    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:27.868159    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:30.414374    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:30.439656    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:30.469888    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.469958    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:30.473898    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:30.506297    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.506297    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:30.510214    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:30.545930    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.545982    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:30.549910    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:30.576713    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.576713    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:30.581000    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:30.611561    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.611561    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:30.615085    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:30.643517    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.643600    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:30.647297    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:30.677589    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.677589    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:30.681477    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:30.712989    4248 logs.go:282] 0 containers: []
	W1212 21:35:30.713050    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:30.713050    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:30.713050    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:30.800951    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:30.787443    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.789072    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.790773    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.791718    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.795269    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:30.787443    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.789072    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.790773    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.791718    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:30.795269    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:30.800985    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:30.801031    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:30.827165    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:30.827165    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:30.877219    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:30.877219    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:30.939298    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:30.939298    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:33.484552    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:33.509270    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:33.543536    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.543536    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:33.547226    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:33.579403    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.579456    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:33.583656    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:33.611689    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.611715    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:33.615934    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:33.641875    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.641939    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:33.645973    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:33.678622    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.678622    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:33.682927    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:33.712281    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.712305    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:33.716240    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:33.744051    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.744127    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:33.747417    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:33.779521    4248 logs.go:282] 0 containers: []
	W1212 21:35:33.779591    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:33.779591    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:33.779591    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:33.840187    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:33.840187    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:33.882639    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:33.882639    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:33.969148    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:33.957711    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.958835    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.961164    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.962371    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.963325    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:33.957711    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.958835    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.961164    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.962371    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:33.963325    9207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:33.969148    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:33.969148    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:33.997909    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:33.997909    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:35:36.305111   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:35:36.549852    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:36.575183    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:36.607678    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.607678    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:36.611597    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:36.639236    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.639236    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:36.642949    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:36.671624    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.671715    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:36.677217    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:36.704204    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.704254    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:36.707613    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:36.736929    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.736929    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:36.741122    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:36.769406    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.769406    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:36.772706    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:36.799543    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.799620    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:36.803815    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:36.831079    4248 logs.go:282] 0 containers: []
	W1212 21:35:36.831159    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:36.831159    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:36.831200    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:36.896378    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:36.896378    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:36.934866    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:36.934866    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:37.023664    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:37.013641    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.015155    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.015847    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.018003    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.018979    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:37.013641    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.015155    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.015847    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.018003    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:37.018979    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:37.023664    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:37.023664    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:37.053528    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:37.053528    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:39.635097    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:39.658801    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:39.689842    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.689897    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:39.693102    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:39.720497    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.720497    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:39.724029    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:39.755115    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.755115    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:39.759351    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:39.788837    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.788837    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:39.795292    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:39.820715    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.820715    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:39.824986    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:39.851308    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.851308    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:39.855167    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:39.885106    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.885106    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:39.888522    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:39.919426    4248 logs.go:282] 0 containers: []
	W1212 21:35:39.919426    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:39.919426    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:39.919426    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:40.002497    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:39.992667    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.993466    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.995734    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.996894    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.998122    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:39.992667    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.993466    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.995734    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.996894    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:39.998122    9521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:40.002497    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:40.002497    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:40.033288    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:40.033332    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:40.080834    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:40.080834    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:40.164330    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:40.164330    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:42.710863    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:42.734865    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:42.767778    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.767778    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:42.771433    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:42.798128    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.798128    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:42.801849    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:42.831381    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.831381    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:42.834753    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:42.862923    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.862977    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:42.866577    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:42.894694    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.894694    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:42.898324    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:42.927095    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.927169    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:42.930897    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:42.960437    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.960485    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:42.963899    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:42.992769    4248 logs.go:282] 0 containers: []
	W1212 21:35:42.992769    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:42.992769    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:42.992769    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:43.076050    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:43.066448    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.067916    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.069057    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.070194    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.071270    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:43.066448    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.067916    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.069057    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.070194    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:43.071270    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:43.076050    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:43.076050    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:43.117444    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:43.117444    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:43.163609    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:43.163675    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:43.222686    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:43.222686    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1212 21:35:46.341652   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:35:45.769012    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:45.792622    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:45.828260    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.828300    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:45.832060    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:45.860265    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.860345    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:45.864126    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:45.892449    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.892449    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:45.895900    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:45.928107    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.928492    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:45.933848    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:45.963204    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.963204    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:45.967539    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:45.994068    4248 logs.go:282] 0 containers: []
	W1212 21:35:45.994068    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:45.997960    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:46.029774    4248 logs.go:282] 0 containers: []
	W1212 21:35:46.029774    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:46.034005    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:46.064207    4248 logs.go:282] 0 containers: []
	W1212 21:35:46.064207    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:46.064275    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:46.064297    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:46.150334    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:46.151334    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:46.192422    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:46.192422    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:46.282161    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:46.268121    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.269865    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.274691    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.275494    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.278053    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:46.268121    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.269865    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.274691    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.275494    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:46.278053    9859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:46.282161    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:46.282161    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:46.308247    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:46.308247    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:48.880691    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:48.905168    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:48.936143    4248 logs.go:282] 0 containers: []
	W1212 21:35:48.936143    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:48.941055    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:48.967633    4248 logs.go:282] 0 containers: []
	W1212 21:35:48.967681    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:48.972986    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:49.001908    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.001978    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:49.005690    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:49.033288    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.033288    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:49.037158    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:49.068272    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.068272    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:49.072674    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:49.118349    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.118385    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:49.123821    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:49.152003    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.152003    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:49.155603    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:49.184782    4248 logs.go:282] 0 containers: []
	W1212 21:35:49.184857    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:49.184857    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:49.184857    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:49.245561    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:49.245561    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:49.286211    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:49.286211    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:49.376977    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:49.367834   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.369032   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.370233   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.372131   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.374116   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:49.367834   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.369032   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.370233   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.372131   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:49.374116   10030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:49.376977    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:49.376977    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:49.403713    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:49.403713    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:51.956745    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:51.982481    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:52.016133    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.016133    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:52.021023    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:52.049536    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.049536    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:52.053672    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:52.080846    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.080846    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:52.084803    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:52.113297    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.113338    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:52.116543    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:52.147940    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.147940    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:52.151545    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:52.182320    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.182320    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:52.186389    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:52.214710    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.214710    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:52.219053    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:52.245190    4248 logs.go:282] 0 containers: []
	W1212 21:35:52.245190    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:52.245190    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:52.245190    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:52.298311    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:52.298311    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:52.366732    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:52.366732    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:52.407792    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:52.407792    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:52.496661    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:52.484312   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.485150   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.488916   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.491754   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.493226   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:52.484312   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.485150   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.488916   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.491754   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:52.493226   10208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:52.496661    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:52.496715    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:55.027190    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:55.056715    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:55.092856    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.092927    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:55.096633    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:55.129725    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.129780    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:55.133503    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:55.161685    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.161764    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:55.165325    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:55.194081    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.194081    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:55.197364    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:55.229572    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.229572    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:55.233158    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:55.260429    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.260429    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:55.264792    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:55.292582    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.292582    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:55.296385    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:55.324732    4248 logs.go:282] 0 containers: []
	W1212 21:35:55.324732    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:55.324732    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:55.324732    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:35:55.378532    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:55.378610    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:55.442350    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:55.442350    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:55.482305    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:55.482305    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:55.568490    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:55.558199   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.559481   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.560697   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.561968   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.565448   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:55.558199   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.559481   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.560697   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.561968   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:55.565448   10364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:55.568490    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:55.568490    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:58.101014    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:35:58.125393    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:35:58.155725    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.155725    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:35:58.159868    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:35:58.189119    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.189119    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:35:58.193157    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:35:58.223237    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.223237    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:35:58.226683    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:35:58.255695    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.255753    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:35:58.260433    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:35:58.288788    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.288859    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:35:58.292437    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:35:58.323598    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.323671    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:35:58.327478    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:35:58.354090    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.354178    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:35:58.357888    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:35:58.386253    4248 logs.go:282] 0 containers: []
	W1212 21:35:58.386279    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:35:58.386314    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:35:58.386314    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:35:58.447141    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:35:58.447141    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:35:58.486943    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:35:58.486943    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:35:58.568464    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:35:58.560119   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.561312   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.562297   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.564010   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.565939   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:35:58.560119   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.561312   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.562297   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.564010   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:35:58.565939   10507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:35:58.568464    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:35:58.568464    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:35:58.595849    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:35:58.595886    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:35:56.384511   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:36:01.149596    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:01.175623    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:01.208519    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.208575    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:01.211916    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:01.245312    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.245312    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:01.249530    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:01.278392    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.278392    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:01.286444    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:01.317355    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.317406    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:01.321724    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:01.353217    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.353217    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:01.357807    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:01.391831    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.391831    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:01.395723    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:01.424095    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.424095    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:01.429268    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:01.462926    4248 logs.go:282] 0 containers: []
	W1212 21:36:01.462926    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:01.462926    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:01.462926    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:01.551074    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:01.539269   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.540233   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.541438   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.542446   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.543830   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:01.539269   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.540233   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.541438   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.542446   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:01.543830   10660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:01.551074    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:01.551074    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:01.578369    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:01.578369    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:01.629021    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:01.629021    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:01.698809    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:01.698809    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:04.243670    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:04.267614    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:04.301625    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.301625    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:04.305862    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:04.333024    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.333024    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:04.336022    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:04.367192    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.367192    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:04.370605    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:04.399595    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.399648    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:04.403374    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:04.432778    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.432778    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:04.436573    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:04.467412    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.467412    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:04.471329    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:04.498578    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.498578    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:04.502597    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:04.531764    4248 logs.go:282] 0 containers: []
	W1212 21:36:04.531784    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:04.531784    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:04.531843    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:04.570958    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:04.570958    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:04.661113    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:04.648460   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.649248   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.652578   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.653840   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.654704   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:04.648460   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.649248   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.652578   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.653840   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:04.654704   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:04.661169    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:04.661169    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:04.690481    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:04.690542    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:04.741754    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:04.741754    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:07.308278    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:07.331410    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:07.365378    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.365378    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:07.369132    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:07.399048    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.399048    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:07.403054    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:07.435986    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.435986    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:07.440195    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:07.468277    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.468277    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:07.473061    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:07.502665    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.502737    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:07.505870    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:07.535294    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.535294    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:07.539205    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:07.568443    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.568443    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:07.571680    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:07.603485    4248 logs.go:282] 0 containers: []
	W1212 21:36:07.603485    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:07.603485    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:07.603485    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:07.654029    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:07.654069    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:07.718737    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:07.718737    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:07.758197    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:07.758197    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:07.848949    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:07.837788   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.838778   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.839873   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.841153   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.842068   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:07.837788   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.838778   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.839873   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.841153   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:07.842068   11009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:07.848949    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:07.848949    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1212 21:36:06.422793   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	I1212 21:36:10.383554    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:10.406923    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:10.436672    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.438058    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:10.441328    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:10.470329    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.470329    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:10.475355    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:10.503029    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.504040    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:10.508067    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:10.535078    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.535078    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:10.538911    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:10.572578    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.572578    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:10.576953    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:10.605274    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.605274    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:10.609586    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:10.636893    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.636893    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:10.640901    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:10.670634    4248 logs.go:282] 0 containers: []
	W1212 21:36:10.670634    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:10.670634    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:10.670634    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:10.740301    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:10.740301    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:10.777927    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:10.777927    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:10.872052    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:10.861842   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.862596   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.865231   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.866657   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.867639   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:10.861842   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.862596   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.865231   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.866657   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:10.867639   11156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:10.872052    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:10.872052    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:10.903069    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:10.903069    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:13.460331    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:13.482121    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:13.513917    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.513943    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:13.517730    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:13.551122    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.551122    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:13.554989    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:13.585497    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.585531    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:13.591062    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:13.617529    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.617529    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:13.620977    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:13.649520    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.649563    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:13.653279    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:13.680320    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.680320    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:13.684170    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:13.715222    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.715222    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:13.719105    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:13.748387    4248 logs.go:282] 0 containers: []
	W1212 21:36:13.748387    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:13.748387    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:13.748387    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:13.813468    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:13.813468    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:13.854552    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:13.854552    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:13.940965    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:13.930876   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.931992   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.933250   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.934621   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.935762   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:13.930876   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.931992   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.933250   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.934621   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:13.935762   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:13.940965    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:13.940965    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:13.967276    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:13.967276    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:16.517841    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:16.542582    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:16.572379    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.572379    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:16.576052    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:16.603452    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.603544    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:16.607222    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:16.634623    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.634623    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:16.638649    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:16.669129    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.669129    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:16.673369    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:16.701294    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.701294    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:16.707834    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:16.734962    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.734962    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:16.739394    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:16.768315    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.768315    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:16.772559    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:16.801591    4248 logs.go:282] 0 containers: []
	W1212 21:36:16.801670    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:16.801687    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:16.801687    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:16.866870    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:16.866870    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:16.906766    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:16.906766    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:16.998441    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:16.985782   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.986673   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.991870   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.993024   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.993940   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:16.985782   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.986673   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.991870   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.993024   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:16.993940   11487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:16.998441    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:16.998441    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:17.026255    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:17.026255    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:19.584671    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:19.610156    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:19.644064    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.644129    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:19.648288    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:19.678479    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.678479    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:19.682139    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:19.711766    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.711766    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:19.715209    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:19.744913    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.744913    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:19.748961    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:19.780312    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.780312    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:19.784184    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:19.812347    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.812347    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:19.816306    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:19.844923    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.844923    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:19.848927    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:19.877442    4248 logs.go:282] 0 containers: []
	W1212 21:36:19.877442    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:19.877442    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:19.877442    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:19.942152    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:19.942152    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:19.981218    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:19.981218    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:20.072288    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:20.063301   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.064454   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.065982   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.067322   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.068426   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:20.063301   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.064454   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.065982   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.067322   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:20.068426   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:20.072288    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:20.072288    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:20.099643    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:20.099643    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:36:16.463968   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): Get "https://127.0.0.1:62842/api/v1/nodes/no-preload-285600": EOF
	W1212 21:36:25.116556   13804 node_ready.go:55] error getting node "no-preload-285600" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1212 21:36:25.116556   13804 node_ready.go:38] duration metric: took 6m0.0006991s for node "no-preload-285600" to be "Ready" ...
	I1212 21:36:22.659535    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:22.685256    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:22.716650    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.716701    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:22.720282    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:22.748382    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.748382    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:22.752865    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:22.781255    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.781255    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:22.785427    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:22.817875    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.817875    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:22.822168    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:22.850625    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.850625    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:22.854306    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:22.882603    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.882665    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:22.886850    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:22.915661    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.915661    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:22.919273    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:22.948188    4248 logs.go:282] 0 containers: []
	W1212 21:36:22.948219    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:22.948219    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:22.948272    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:23.001854    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:23.001854    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:23.062918    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:23.062918    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:23.102898    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:23.102898    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:23.188691    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:23.179388   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.180542   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.181616   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.183060   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.184504   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:23.179388   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.180542   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.181616   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.183060   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:23.184504   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:23.188733    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:23.188773    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:25.120615   13804 out.go:203] 
	W1212 21:36:25.123657   13804 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 21:36:25.123657   13804 out.go:285] * 
	W1212 21:36:25.125621   13804 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 21:36:25.128573   13804 out.go:203] 
	I1212 21:36:25.720984    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:25.747517    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:25.789126    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.789126    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:25.792555    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:25.825100    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.825100    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:25.829108    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:25.859944    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.859944    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:25.862936    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:25.899027    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.899027    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:25.903029    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:25.932069    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.932069    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:25.937652    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:25.970039    4248 logs.go:282] 0 containers: []
	W1212 21:36:25.970039    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:25.974772    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:26.007166    4248 logs.go:282] 0 containers: []
	W1212 21:36:26.007166    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:26.010547    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:26.043326    4248 logs.go:282] 0 containers: []
	W1212 21:36:26.043326    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:26.043380    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:26.043380    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:26.136579    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:26.129570   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.130713   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.131776   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.133029   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.133944   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:26.129570   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.130713   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.131776   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.133029   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:26.133944   11967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:26.136579    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:26.136579    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:26.164100    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:26.164100    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:26.215761    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:26.215761    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:26.284627    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:26.284627    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:28.841950    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:28.867715    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:28.905745    4248 logs.go:282] 0 containers: []
	W1212 21:36:28.905745    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:28.908970    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:28.939518    4248 logs.go:282] 0 containers: []
	W1212 21:36:28.939518    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:28.943636    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:28.973085    4248 logs.go:282] 0 containers: []
	W1212 21:36:28.973085    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:28.977068    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:29.006533    4248 logs.go:282] 0 containers: []
	W1212 21:36:29.006533    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:29.011428    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:29.051385    4248 logs.go:282] 0 containers: []
	W1212 21:36:29.051385    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:29.055841    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:29.091342    4248 logs.go:282] 0 containers: []
	W1212 21:36:29.091342    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:29.095332    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:29.123336    4248 logs.go:282] 0 containers: []
	W1212 21:36:29.123336    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:29.126340    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:29.155367    4248 logs.go:282] 0 containers: []
	W1212 21:36:29.155367    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:29.155367    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:29.155367    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:29.207287    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:29.207287    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:29.272168    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:29.272168    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:29.312257    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:29.312257    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:29.391617    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:29.382784   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.383879   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.384928   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.386418   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.387786   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:29.382784   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.383879   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.384928   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.386418   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:29.387786   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:29.391617    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:29.391617    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:31.923841    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:31.950124    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:31.983967    4248 logs.go:282] 0 containers: []
	W1212 21:36:31.983967    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:31.987737    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:32.015027    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.015027    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:32.020109    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:32.055983    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.056068    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:32.059730    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:32.089140    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.089140    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:32.094462    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:32.122929    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.122929    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:32.126837    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:32.156251    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.156251    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:32.160350    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:32.191862    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.191949    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:32.195885    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:32.223866    4248 logs.go:282] 0 containers: []
	W1212 21:36:32.223925    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:32.223925    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:32.223950    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:32.255049    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:32.255049    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:32.302818    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:32.302880    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:32.366288    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:32.366288    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:32.405752    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:32.405752    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:32.490704    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:32.476428   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.477300   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.482162   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.483107   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.484601   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:32.476428   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.477300   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.482162   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.483107   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:32.484601   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:34.995924    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:35.024010    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:35.056509    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.056509    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:35.060912    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:35.093115    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.093115    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:35.097758    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:35.128352    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.128352    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:35.132438    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:35.159545    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.159545    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:35.163881    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:35.193455    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.193455    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:35.197292    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:35.225826    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.225826    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:35.230118    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:35.258718    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.258718    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:35.262754    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:35.289884    4248 logs.go:282] 0 containers: []
	W1212 21:36:35.289884    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:35.289884    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:35.289884    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:35.354177    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:35.354177    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:35.392766    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:35.393766    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:35.508577    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:35.495201   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.497790   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.499934   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.500799   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.503455   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:35.495201   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.497790   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.499934   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.500799   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:35.503455   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:35.508577    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:35.508577    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:35.536964    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:35.538023    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:38.113096    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:38.138012    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:38.170611    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.170611    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:38.174540    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:38.203460    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.203460    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:38.209947    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:38.239843    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.239843    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:38.243116    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:38.271611    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.271611    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:38.275487    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:38.305418    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.305450    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:38.309409    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:38.336902    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.336902    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:38.340380    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:38.367606    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.367606    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:38.373821    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:38.402583    4248 logs.go:282] 0 containers: []
	W1212 21:36:38.402583    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:38.402583    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:38.402583    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:38.438279    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:38.438279    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:38.525316    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:38.512227   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.513179   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.516702   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.518912   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:38.519829   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:36:38.525316    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:38.525316    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:38.552742    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:38.553263    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:38.623531    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:38.623531    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:41.192803    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:41.221527    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:41.253765    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.253765    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:41.258162    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:41.286154    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.286154    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:41.290125    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:41.316985    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.316985    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:41.321219    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:41.349797    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.349797    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:41.353105    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:41.383082    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.383082    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:41.386895    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:41.414456    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.414456    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:41.418483    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:41.449520    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.449577    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:41.453163    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:41.486452    4248 logs.go:282] 0 containers: []
	W1212 21:36:41.486504    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:41.486504    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:41.486504    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:41.547617    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:41.547617    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:41.587426    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:41.587426    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:41.672162    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:41.660909   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.663125   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.664194   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.665601   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:41.666634   12792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:36:41.672162    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:41.672162    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:41.698838    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:41.698838    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:44.254238    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:44.279639    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:44.313852    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.313852    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:44.317789    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:44.346488    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.346488    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:44.349923    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:44.379740    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.379774    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:44.383168    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:44.412140    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.412140    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:44.416191    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:44.460651    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.460681    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:44.465023    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:44.496502    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.496526    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:44.500357    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:44.532104    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.532155    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:44.536284    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:44.564677    4248 logs.go:282] 0 containers: []
	W1212 21:36:44.564677    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:44.564677    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:44.564768    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:44.642641    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:44.642641    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:44.681185    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:44.681185    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:44.775811    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:44.763716   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.764946   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.767080   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.768963   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:44.770287   12952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:36:44.775858    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:44.775858    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:44.802443    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:44.802443    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:47.355434    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:47.380861    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:47.416615    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.416688    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:47.422899    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:47.449927    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.449927    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:47.453937    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:47.482382    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.482382    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:47.486265    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:47.517752    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.517752    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:47.521863    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:47.553097    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.553097    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:47.557020    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:47.586229    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.586229    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:47.590605    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:47.629776    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.629776    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:47.633503    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:47.660408    4248 logs.go:282] 0 containers: []
	W1212 21:36:47.660408    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:47.660408    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:47.660408    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:47.751292    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:47.741586   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.742768   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.744535   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.745790   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:47.746879   13105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:36:47.751292    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:47.751292    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:47.779192    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:47.779254    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:47.837296    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:47.837296    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:47.900027    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:47.900027    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:50.444550    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:50.467997    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:50.496690    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.496690    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:50.500967    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:50.526317    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.526317    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:50.530527    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:50.561433    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.561433    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:50.566001    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:50.618519    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.618519    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:50.622092    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:50.650073    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.650073    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:50.655016    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:50.683594    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.683623    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:50.687452    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:50.718509    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.718509    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:50.724946    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:50.757545    4248 logs.go:282] 0 containers: []
	W1212 21:36:50.757577    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:50.757618    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:50.757618    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:50.819457    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:50.819457    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:50.858548    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:50.858548    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:50.941749    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:50.931367   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.932776   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.933833   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.935487   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:50.938377   13281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:36:50.941749    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:50.941749    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:50.969772    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:50.969772    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:53.520939    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:53.549491    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:53.583344    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.583344    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:53.588894    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:53.618751    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.618751    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:53.623090    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:53.650283    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.650283    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:53.656108    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:53.682662    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.682727    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:53.686551    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:53.713705    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.713705    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:53.717716    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:53.744792    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.744792    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:53.749211    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:53.779976    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.779976    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:53.783888    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:53.815109    4248 logs.go:282] 0 containers: []
	W1212 21:36:53.815109    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:53.815109    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:53.815109    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:53.876921    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:53.876921    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:53.916304    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:53.916304    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:54.003977    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:53.994198   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.995584   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.997212   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.998274   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:53.999401   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 21:36:54.004510    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:54.004510    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:54.033807    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:54.033807    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:56.586896    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:56.610373    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:56.643875    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.643875    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:56.648210    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:56.679979    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.679979    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:56.684252    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:56.712701    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.712745    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:56.716425    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:56.746231    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.746231    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:56.750051    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:56.778902    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.778902    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:56.784361    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:56.813624    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.813624    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:56.817949    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:56.846221    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.846221    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:56.849772    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:56.880299    4248 logs.go:282] 0 containers: []
	W1212 21:36:56.880299    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:56.880299    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:56.880299    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:36:56.945090    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:36:56.946089    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:36:56.985505    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:36:56.985505    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:36:57.077375    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:36:57.068729   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.069760   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.070813   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.071834   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.072749   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:36:57.068729   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.069760   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.070813   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.071834   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:36:57.072749   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:36:57.077375    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:36:57.077375    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:36:57.103533    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:36:57.103533    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:36:59.659092    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:36:59.684113    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:36:59.716016    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.716040    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:36:59.719576    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:36:59.749209    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.749209    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:36:59.752876    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:36:59.781442    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.781442    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:36:59.785342    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:36:59.814766    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.814766    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:36:59.818786    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:36:59.846373    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.846373    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:36:59.849782    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:36:59.877994    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.877994    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:36:59.881893    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:36:59.910479    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.910479    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:36:59.914372    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:36:59.946561    4248 logs.go:282] 0 containers: []
	W1212 21:36:59.946561    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:36:59.946561    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:36:59.946561    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:00.008124    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:00.008124    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:00.047147    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:00.047147    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:00.137432    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:00.126870   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.127736   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.130207   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.131199   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.132348   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:00.126870   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.127736   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.130207   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.131199   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:00.132348   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:00.137480    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:00.137480    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:00.167211    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:00.167211    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:02.725601    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:02.750880    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:02.781655    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.781720    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:02.785930    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:02.814342    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.815352    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:02.819060    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:02.848212    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.848212    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:02.852622    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:02.879034    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.879034    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:02.883002    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:02.914061    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.914061    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:02.918271    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:02.946216    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.946289    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:02.949752    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:02.979537    4248 logs.go:282] 0 containers: []
	W1212 21:37:02.979570    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:02.983289    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:03.012201    4248 logs.go:282] 0 containers: []
	W1212 21:37:03.012201    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:03.012201    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:03.012201    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:03.098494    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:03.086265   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.087072   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.089538   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.090486   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.092996   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:03.086265   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.087072   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.089538   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.090486   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:03.092996   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:03.098494    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:03.098494    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:03.124942    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:03.124942    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:03.172838    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:03.172838    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:03.233652    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:03.233652    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:05.778260    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:05.806049    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:05.834569    4248 logs.go:282] 0 containers: []
	W1212 21:37:05.834569    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:05.838184    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:05.871331    4248 logs.go:282] 0 containers: []
	W1212 21:37:05.871331    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:05.874924    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:05.904108    4248 logs.go:282] 0 containers: []
	W1212 21:37:05.904108    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:05.907882    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:05.941911    4248 logs.go:282] 0 containers: []
	W1212 21:37:05.941911    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:05.945711    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:05.978806    4248 logs.go:282] 0 containers: []
	W1212 21:37:05.978845    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:05.983103    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:06.010395    4248 logs.go:282] 0 containers: []
	W1212 21:37:06.010395    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:06.015899    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:06.043426    4248 logs.go:282] 0 containers: []
	W1212 21:37:06.043475    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:06.047525    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:06.075777    4248 logs.go:282] 0 containers: []
	W1212 21:37:06.075777    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:06.075777    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:06.075777    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:06.140912    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:06.140912    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:06.180839    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:06.180839    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:06.273920    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:06.262433   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.263564   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.264506   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.265910   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.267120   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:06.262433   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.263564   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.264506   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.265910   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:06.267120   14082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:06.273941    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:06.273941    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:06.301408    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:06.301408    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:08.853362    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:08.880482    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:08.912285    4248 logs.go:282] 0 containers: []
	W1212 21:37:08.912285    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:08.915914    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:08.945359    4248 logs.go:282] 0 containers: []
	W1212 21:37:08.945359    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:08.951021    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:08.978398    4248 logs.go:282] 0 containers: []
	W1212 21:37:08.978398    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:08.981959    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:09.013763    4248 logs.go:282] 0 containers: []
	W1212 21:37:09.013763    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:09.017724    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:09.045423    4248 logs.go:282] 0 containers: []
	W1212 21:37:09.045423    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:09.049596    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:09.077554    4248 logs.go:282] 0 containers: []
	W1212 21:37:09.077554    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:09.081163    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:09.108945    4248 logs.go:282] 0 containers: []
	W1212 21:37:09.109001    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:09.112577    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:09.141679    4248 logs.go:282] 0 containers: []
	W1212 21:37:09.141740    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:09.141765    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:09.141765    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:09.207494    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:09.208014    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:09.275675    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:09.275675    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:09.320177    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:09.320252    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:09.418820    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:09.405124   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.406042   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.410434   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.411496   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.412429   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:09.405124   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.406042   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.410434   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.411496   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:09.412429   14269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:09.418849    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:09.418849    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:11.950067    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:11.974163    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:12.007025    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.007025    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:12.010964    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:12.042863    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.042863    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:12.046143    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:12.076655    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.076726    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:12.080236    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:12.107161    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.107161    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:12.113344    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:12.142179    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.142272    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:12.146446    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:12.176797    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.176797    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:12.180681    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:12.209797    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.209797    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:12.213605    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:12.244494    4248 logs.go:282] 0 containers: []
	W1212 21:37:12.244494    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:12.244494    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:12.244494    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:12.332970    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:12.322197   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.323340   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.325130   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.326221   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.328022   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:12.322197   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.323340   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.325130   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.326221   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:12.328022   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:12.332970    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:12.332970    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:12.362486    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:12.363006    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:12.407548    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:12.407548    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:12.469640    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:12.469640    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:15.019141    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:15.042869    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:15.073404    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.073404    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:15.076962    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:15.105390    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.105390    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:15.109785    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:15.143740    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.143775    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:15.147734    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:15.174650    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.174711    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:15.178235    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:15.207870    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.207870    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:15.212288    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:15.248454    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.248454    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:15.253060    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:15.282067    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.282067    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:15.285778    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:15.317032    4248 logs.go:282] 0 containers: []
	W1212 21:37:15.317032    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:15.317032    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:15.317032    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:15.350767    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:15.350767    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:15.408508    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:15.408508    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:15.471124    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:15.471124    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:15.511541    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:15.511541    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:15.597230    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:15.586821   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.588485   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.590856   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.591864   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.593117   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:15.586821   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.588485   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.590856   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.591864   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:15.593117   14592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:18.103161    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:18.132020    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:18.167621    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.167621    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:18.171555    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:18.197535    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.197535    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:18.201484    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:18.231207    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.231237    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:18.234569    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:18.262608    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.262608    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:18.266310    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:18.291496    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.291496    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:18.296129    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:18.323567    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.323567    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:18.328112    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:18.363055    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.363055    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:18.368448    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:18.398543    4248 logs.go:282] 0 containers: []
	W1212 21:37:18.398543    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:18.398543    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:18.398543    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:18.451687    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:18.451738    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:18.512324    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:18.512324    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:18.553614    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:18.553614    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:18.644707    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:18.634792   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.635628   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.638131   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.639176   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.640339   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:18.634792   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.635628   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.638131   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.639176   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:18.640339   14750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:18.644734    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:18.644779    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:21.175562    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:21.201442    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:21.233480    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.233480    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:21.237891    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:21.267032    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.267032    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:21.273539    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:21.301291    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.301291    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:21.304622    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:21.333953    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.333953    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:21.336973    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:21.366442    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.366442    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:21.370770    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:21.401250    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.401326    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:21.406507    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:21.434989    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.434989    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:21.438536    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:21.468847    4248 logs.go:282] 0 containers: []
	W1212 21:37:21.468895    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:21.468895    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:21.468937    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:21.506543    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:21.506543    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:21.592900    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:21.582025   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.584472   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.585983   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.587799   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.588955   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:21.582025   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.584472   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.585983   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.587799   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:21.588955   14890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:21.592928    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:21.592980    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:21.624073    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:21.624114    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:21.675642    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:21.675642    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:24.243223    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:24.272878    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:24.306285    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.306285    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:24.310609    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:24.340982    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.340982    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:24.344434    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:24.371790    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.371790    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:24.376448    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:24.403045    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.403045    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:24.406643    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:24.436352    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.436352    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:24.440299    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:24.472033    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.472033    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:24.476007    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:24.508554    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.508554    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:24.512161    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:24.542727    4248 logs.go:282] 0 containers: []
	W1212 21:37:24.542727    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:24.542727    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:24.542727    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:24.570829    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:24.570829    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:24.618660    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:24.618660    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:24.682106    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:24.682106    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:24.721952    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:24.721952    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:24.799468    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:24.791295   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.792228   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.793454   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.794593   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.795784   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:24.791295   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.792228   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.793454   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.794593   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:24.795784   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:27.305001    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:27.330707    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:27.365828    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.365828    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:27.370558    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:27.396820    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.396820    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:27.401269    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:27.430536    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.430536    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:27.434026    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:27.462920    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.462920    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:27.466302    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:27.494753    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.494753    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:27.498776    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:27.526827    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.526827    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:27.530938    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:27.558811    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.558811    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:27.562896    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:27.593235    4248 logs.go:282] 0 containers: []
	W1212 21:37:27.593235    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:27.593235    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:27.593235    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:27.645061    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:27.645061    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:27.708198    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:27.708198    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:27.746161    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:27.746161    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:27.834200    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:27.822744   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.823517   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.828076   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.828851   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.830876   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:27.822744   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.823517   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.828076   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.828851   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:27.830876   15228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:27.834200    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:27.834200    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:30.365194    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:30.390907    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:30.422859    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.422859    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:30.426658    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:30.458081    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.458081    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:30.462130    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:30.492792    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.492838    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:30.496517    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:30.535575    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.535575    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:30.539664    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:30.570934    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.570934    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:30.575357    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:30.606013    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.606013    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:30.610553    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:30.637448    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.637448    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:30.640965    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:30.670791    4248 logs.go:282] 0 containers: []
	W1212 21:37:30.670866    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:30.670866    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:30.670866    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:30.701120    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:30.701120    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:30.751223    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:30.751223    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:30.813495    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:30.813495    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:30.853428    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:30.853428    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:30.937812    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:30.926651   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.927983   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.930891   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.931995   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.933300   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:30.926651   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.927983   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.930891   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.931995   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:30.933300   15394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:33.442840    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:33.471704    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:33.504567    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.504567    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:33.508564    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:33.540112    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.540147    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:33.544036    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:33.572905    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.572905    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:33.576956    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:33.606272    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.606334    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:33.610145    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:33.637137    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.637137    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:33.641246    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:33.670136    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.670136    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:33.673715    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:33.701659    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.701659    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:33.705326    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:33.736499    4248 logs.go:282] 0 containers: []
	W1212 21:37:33.736585    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:33.736585    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:33.736585    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:33.802820    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:33.802820    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:33.841898    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:33.841898    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:33.928502    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:33.917072   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.918297   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.919525   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.922045   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.923688   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:33.917072   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.918297   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.919525   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.922045   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:33.923688   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:33.928502    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:33.928502    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:33.954803    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:33.954803    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:36.508990    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:36.532529    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:36.565107    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.565107    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:36.569219    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:36.599219    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.599219    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:36.604130    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:36.641323    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.641399    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:36.644874    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:36.678077    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.678077    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:36.681676    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:36.717361    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.717361    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:36.720484    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:36.758068    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.758131    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:36.761928    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:36.788886    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.788886    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:36.792763    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:36.822518    4248 logs.go:282] 0 containers: []
	W1212 21:37:36.822518    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:36.822518    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:36.822594    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:36.886902    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:36.886902    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:36.926353    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:36.926353    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:37.017351    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:37.005572   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.006475   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.009602   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.011562   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.013067   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:37.005572   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.006475   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.009602   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.011562   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:37.013067   15695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:37.017351    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:37.017351    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:37.043945    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:37.043945    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:39.613292    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:39.638402    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:39.668963    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.668963    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:39.674050    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:39.706941    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.706993    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:39.711641    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:39.743407    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.743407    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:39.748540    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:39.776567    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.776567    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:39.780756    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:39.809769    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.809769    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:39.814028    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:39.841619    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.841619    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:39.845432    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:39.872294    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.872294    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:39.876039    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:39.906559    4248 logs.go:282] 0 containers: []
	W1212 21:37:39.906559    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:39.906559    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:39.906559    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:39.971123    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:39.971123    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:40.010767    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:40.010767    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:40.121979    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:40.111150   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.112155   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.113799   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.114617   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.117879   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:40.111150   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.112155   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.113799   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.114617   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:40.117879   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:40.121979    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:40.121979    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:40.153150    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:40.153150    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:42.714553    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:42.739259    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:42.773825    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.773825    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:42.777653    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:42.806593    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.806617    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:42.811305    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:42.839804    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.839804    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:42.843545    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:42.871645    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.871645    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:42.877455    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:42.907575    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.907674    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:42.911474    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:42.947872    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.947872    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:42.951182    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:42.981899    4248 logs.go:282] 0 containers: []
	W1212 21:37:42.981899    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:42.985358    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:43.015278    4248 logs.go:282] 0 containers: []
	W1212 21:37:43.015278    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:43.015278    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:43.015278    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:43.083520    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:43.083520    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:43.124100    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:43.124100    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:43.208232    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:43.199122   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.200440   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.201500   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.203019   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.204672   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:43.199122   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.200440   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.201500   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.203019   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:43.204672   16036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:43.208232    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:43.208232    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:43.234266    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:43.234266    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:45.791967    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:45.818451    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:45.851045    4248 logs.go:282] 0 containers: []
	W1212 21:37:45.851045    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:45.854848    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:45.880205    4248 logs.go:282] 0 containers: []
	W1212 21:37:45.880205    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:45.883681    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:45.910629    4248 logs.go:282] 0 containers: []
	W1212 21:37:45.910629    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:45.914618    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:45.944467    4248 logs.go:282] 0 containers: []
	W1212 21:37:45.944467    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:45.948393    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:45.979772    4248 logs.go:282] 0 containers: []
	W1212 21:37:45.979772    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:45.983154    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:46.011861    4248 logs.go:282] 0 containers: []
	W1212 21:37:46.011947    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:46.016147    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:46.043151    4248 logs.go:282] 0 containers: []
	W1212 21:37:46.043151    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:46.048940    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:46.101712    4248 logs.go:282] 0 containers: []
	W1212 21:37:46.101712    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:46.101712    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:46.101712    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:46.165060    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:46.165060    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:46.204152    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:46.204152    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:46.295737    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:46.284405   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.285323   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.289255   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.290702   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.291840   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:46.284405   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.285323   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.289255   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.290702   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:46.291840   16195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:46.295737    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:46.295737    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:46.323140    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:46.323657    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:48.876615    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:48.902293    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:48.935424    4248 logs.go:282] 0 containers: []
	W1212 21:37:48.935424    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:48.939391    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:48.966927    4248 logs.go:282] 0 containers: []
	W1212 21:37:48.966927    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:48.970734    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:49.001644    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.001644    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:49.005407    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:49.035360    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.035360    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:49.042740    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:49.074356    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.074356    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:49.078793    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:49.110567    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.110625    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:49.114551    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:49.145236    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.145236    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:49.149599    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:49.177230    4248 logs.go:282] 0 containers: []
	W1212 21:37:49.177230    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:49.177230    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:49.177230    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:49.240142    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:49.240142    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:49.278723    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:49.278723    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:49.367647    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:49.358947   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.359901   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.361621   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.363095   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.364121   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:49.358947   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.359901   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.361621   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.363095   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:49.364121   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:49.367647    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:49.367647    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:49.397635    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:49.397635    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:51.962408    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:51.992442    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:52.024460    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.024460    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:52.028629    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:52.060221    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.060221    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:52.064265    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:52.104649    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.104649    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:52.109138    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:52.140487    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.140545    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:52.144120    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:52.172932    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.172932    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:52.176618    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:52.206650    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.206650    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:52.210399    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:52.236993    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.236993    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:52.240861    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:52.270655    4248 logs.go:282] 0 containers: []
	W1212 21:37:52.270655    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:52.270655    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:52.270655    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:52.335104    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:52.335104    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:52.370957    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:52.371840    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:52.457985    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:52.448019   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.449089   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.450136   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.451710   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.452676   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:52.448019   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.449089   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.450136   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.451710   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:52.452676   16515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:52.457985    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:52.457985    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:52.486332    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:52.486332    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:55.041298    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:55.065637    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:55.094280    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.094280    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:55.097903    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:55.126902    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.126902    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:55.130716    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:55.159228    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.159228    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:55.163220    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:55.192251    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.192251    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:55.195844    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:55.221302    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.221342    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:55.224818    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:55.251600    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.251600    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:55.258126    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:55.288004    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.288004    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:55.292538    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:55.321503    4248 logs.go:282] 0 containers: []
	W1212 21:37:55.321503    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:55.321503    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:55.321503    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:55.382091    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:55.382091    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:55.417183    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:55.417183    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:55.505809    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:55.497729   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.498853   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.500068   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.501049   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.502394   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:55.497729   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.498853   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.500068   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.501049   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:55.502394   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:55.505857    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:55.505922    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:55.533563    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:55.533563    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:37:58.084879    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:37:58.108938    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:37:58.141011    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.141011    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:37:58.144507    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:37:58.173301    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.173301    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:37:58.177012    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:37:58.205946    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.205946    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:37:58.209603    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:37:58.239537    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.239626    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:37:58.243771    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:37:58.274180    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.274180    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:37:58.278119    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:37:58.306549    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.306589    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:37:58.310707    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:37:58.341993    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.341993    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:37:58.345805    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:37:58.374110    4248 logs.go:282] 0 containers: []
	W1212 21:37:58.374110    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:37:58.374110    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:37:58.374110    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:37:58.438540    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:37:58.438540    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:37:58.479144    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:37:58.479144    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:37:58.563382    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:37:58.555856   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.556864   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.558351   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.559659   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.561038   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:37:58.555856   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.556864   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.558351   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.559659   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:37:58.561038   16836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:37:58.563382    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:37:58.563382    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:37:58.590030    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:37:58.591001    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:01.143523    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:01.166879    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:01.204311    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.204311    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:01.208667    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:01.236959    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.236959    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:01.241497    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:01.268362    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.268362    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:01.272390    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:01.301769    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.301769    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:01.306386    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:01.334250    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.334250    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:01.338080    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:01.367719    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.367719    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:01.371554    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:01.400912    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.400912    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:01.405087    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:01.433025    4248 logs.go:282] 0 containers: []
	W1212 21:38:01.433079    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:01.433112    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:01.433140    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:01.498716    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:01.498716    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:01.537789    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:01.537789    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:01.621520    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:01.609272   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.610819   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.612400   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.614811   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.616071   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:01.609272   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.610819   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.612400   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.614811   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:01.616071   16988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:01.621520    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:01.621520    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:01.651241    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:01.651241    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:04.202726    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:04.233568    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:04.264266    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.264266    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:04.268731    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:04.299179    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.299179    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:04.304521    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:04.333532    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.333532    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:04.337480    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:04.370718    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.370774    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:04.374487    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:04.404113    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.404113    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:04.407484    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:04.439641    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.439641    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:04.442993    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:04.473704    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.473745    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:04.478029    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:04.506810    4248 logs.go:282] 0 containers: []
	W1212 21:38:04.506810    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:04.506810    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:04.506810    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:04.536546    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:04.536546    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:04.595827    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:04.595827    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:04.655750    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:04.655750    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:04.693978    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:04.693978    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:04.780038    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:04.769629   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.770581   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.771942   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.773186   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.774761   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:04.769629   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.770581   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.771942   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.773186   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:04.774761   17166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:07.285343    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:07.309791    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:07.342594    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.342658    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:07.346771    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:07.375078    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.375078    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:07.378622    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:07.406406    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.406406    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:07.409700    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:07.439671    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.439702    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:07.443226    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:07.474113    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.474113    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:07.478278    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:07.506266    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.506266    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:07.511246    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:07.539784    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.539813    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:07.543598    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:07.571190    4248 logs.go:282] 0 containers: []
	W1212 21:38:07.571190    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:07.571190    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:07.571190    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:07.621969    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:07.621969    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:07.686280    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:07.686280    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:07.729355    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:07.729355    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:07.818055    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:07.806835   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.807966   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.809280   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.811568   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.812813   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:07.806835   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.807966   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.809280   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.811568   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:07.812813   17338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:07.818055    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:07.818055    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:10.353048    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:10.380806    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:10.411111    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.411111    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:10.417906    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:10.445879    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.445879    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:10.449270    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:10.478782    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.478782    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:10.482418    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:10.514768    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.514768    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:10.518402    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:10.549807    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.549841    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:10.553625    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:10.584420    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.584420    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:10.590061    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:10.617570    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.617570    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:10.621915    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:10.650697    4248 logs.go:282] 0 containers: []
	W1212 21:38:10.650697    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:10.650697    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:10.650697    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:10.688035    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:10.688035    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:10.779967    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:10.768220   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.770447   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.773427   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.775250   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.776328   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:10.768220   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.770447   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.773427   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.775250   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:10.776328   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:10.779967    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:10.779967    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:10.808999    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:10.808999    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:10.857901    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:10.857901    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:13.426838    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:13.455711    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:13.487399    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.487399    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:13.491220    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:13.521694    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.521694    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:13.525468    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:13.554648    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.554648    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:13.559306    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:13.587335    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.587335    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:13.591025    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:13.619654    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.619654    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:13.623563    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:13.653939    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.653939    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:13.657955    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:13.687366    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.687396    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:13.690775    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:13.722113    4248 logs.go:282] 0 containers: []
	W1212 21:38:13.722193    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:13.722231    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:13.722231    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:13.810317    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:13.799024   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.800050   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.801497   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.802454   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.803945   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:13.799024   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.800050   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.801497   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.802454   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:13.803945   17633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:13.810317    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:13.810317    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:13.838155    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:13.838155    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:13.883053    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:13.883053    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:13.946291    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:13.946291    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:16.490914    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:16.517055    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:16.546289    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.546289    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:16.549648    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:16.579266    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.579266    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:16.583479    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:16.622750    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.622824    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:16.625968    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:16.653518    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.653558    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:16.657430    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:16.684716    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.684716    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:16.688471    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:16.715508    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.715508    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:16.720093    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:16.747105    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.747105    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:16.751009    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:16.778855    4248 logs.go:282] 0 containers: []
	W1212 21:38:16.778889    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:16.778935    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:16.778935    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:16.866923    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:16.857374   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.858226   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.860682   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.861845   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.863178   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:16.857374   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.858226   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.860682   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.861845   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:16.863178   17808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:16.866923    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:16.866923    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:16.893634    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:16.893634    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:16.947106    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:16.947106    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:17.009695    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:17.009695    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:19.555421    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:19.585126    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:19.618491    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.618491    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:19.621943    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:19.649934    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.649934    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:19.654446    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:19.682441    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.682441    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:19.686687    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:19.713873    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.713873    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:19.718086    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:19.746901    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.746901    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:19.751802    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:19.780998    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.780998    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:19.785656    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:19.814435    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.814435    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:19.818376    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:19.842539    4248 logs.go:282] 0 containers: []
	W1212 21:38:19.842539    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:19.842539    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:19.842539    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:19.931943    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:19.922035   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.923477   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.924372   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.926985   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.927824   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:19.922035   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.923477   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.924372   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.926985   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:19.927824   17971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:19.931943    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:19.931943    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:19.962377    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:19.962377    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:20.016397    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:20.016397    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:20.080069    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:20.080069    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:22.623830    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:22.648339    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:22.676455    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.676455    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:22.680434    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:22.707663    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.707663    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:22.711156    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:22.740689    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.740689    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:22.747514    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:22.774589    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.774589    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:22.778733    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:22.809957    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.810016    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:22.814216    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:22.843548    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.843548    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:22.848917    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:22.881212    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.881212    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:22.885127    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:22.912249    4248 logs.go:282] 0 containers: []
	W1212 21:38:22.912249    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:22.912249    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:22.912249    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:22.971764    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:22.971764    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:23.012466    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:23.012466    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:23.098040    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:23.088804   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.090233   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.092125   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.092902   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.095639   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:23.088804   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.090233   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.092125   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.092902   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:23.095639   18140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:23.098040    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:23.098040    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:23.125246    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:23.125299    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:25.680678    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:25.710865    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:25.744205    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.744205    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:25.748694    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:25.775965    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.775965    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:25.780266    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:25.809226    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.809226    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:25.813428    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:25.843074    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.843074    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:25.847624    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:25.875245    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.875307    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:25.878757    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:25.909526    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.909526    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:25.913226    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:25.940382    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.940382    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:25.945238    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:25.971090    4248 logs.go:282] 0 containers: []
	W1212 21:38:25.971123    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:25.971123    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:25.971123    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:26.056782    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:26.046652   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.047515   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.050195   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.051210   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.054000   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:26.046652   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.047515   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.050195   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.051210   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:26.054000   18296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:26.056824    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:26.056824    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:26.088188    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:26.088188    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:26.134947    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:26.134990    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:26.195007    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:26.195007    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:28.743432    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:28.770616    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:28.803520    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.803520    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:28.810180    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:28.835854    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.835854    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:28.839216    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:28.867332    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.867332    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:28.871770    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:28.898967    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.899021    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:28.902579    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:28.930727    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.930781    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:28.934892    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:28.965429    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.965484    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:28.968912    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:28.994989    4248 logs.go:282] 0 containers: []
	W1212 21:38:28.995086    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:28.998524    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:29.029494    4248 logs.go:282] 0 containers: []
	W1212 21:38:29.029494    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:29.029494    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:29.029494    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:29.084546    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:29.084546    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:29.146031    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:29.146031    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:29.185235    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:29.185235    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:29.276958    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:29.265529   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.266811   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.267596   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.272121   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.273001   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:29.265529   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.266811   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.267596   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.272121   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:29.273001   18478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:29.277002    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:29.277048    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:31.813255    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:31.837157    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:31.867469    4248 logs.go:282] 0 containers: []
	W1212 21:38:31.867532    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:31.871061    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:31.899568    4248 logs.go:282] 0 containers: []
	W1212 21:38:31.899568    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:31.903533    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:31.932812    4248 logs.go:282] 0 containers: []
	W1212 21:38:31.932812    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:31.937348    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:31.968624    4248 logs.go:282] 0 containers: []
	W1212 21:38:31.968624    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:31.972596    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:31.999542    4248 logs.go:282] 0 containers: []
	W1212 21:38:31.999542    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:32.004209    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:32.034665    4248 logs.go:282] 0 containers: []
	W1212 21:38:32.034665    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:32.038848    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:32.068480    4248 logs.go:282] 0 containers: []
	W1212 21:38:32.068480    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:32.073156    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:32.104268    4248 logs.go:282] 0 containers: []
	W1212 21:38:32.104268    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:32.104268    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:32.104268    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:32.168878    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:32.168878    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:32.209739    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:32.209739    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:32.299388    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:32.287377   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.288363   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.290983   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.292213   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.293196   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:32.287377   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.288363   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.290983   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.292213   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:32.293196   18622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:32.299388    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:32.299388    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:32.326590    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:32.327171    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:34.882209    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:34.906646    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:34.937770    4248 logs.go:282] 0 containers: []
	W1212 21:38:34.937770    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:34.941176    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:34.970749    4248 logs.go:282] 0 containers: []
	W1212 21:38:34.970749    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:34.974824    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:35.003731    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.003731    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:35.011153    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:35.043865    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.043865    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:35.047948    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:35.079197    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.079197    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:35.084870    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:35.111591    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.111645    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:35.115847    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:35.144310    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.144310    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:35.148221    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:35.176803    4248 logs.go:282] 0 containers: []
	W1212 21:38:35.176833    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:35.176833    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:35.176833    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:35.236846    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:35.236846    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:35.284685    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:35.284685    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:35.374702    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:35.363981   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.364969   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.367167   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.368565   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.369594   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:35.363981   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.364969   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.367167   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.368565   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:35.369594   18782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:35.374702    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:35.374702    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:35.402523    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:35.402584    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:37.960369    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:37.991489    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:38.021000    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.021059    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:38.024791    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:38.056577    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.056577    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:38.061074    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:38.091553    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.091619    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:38.095584    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:38.124245    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.124245    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:38.127814    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:38.156149    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.156149    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:38.159694    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:38.191453    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.191475    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:38.195307    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:38.226021    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.226046    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:38.229445    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:38.258701    4248 logs.go:282] 0 containers: []
	W1212 21:38:38.258701    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:38.258701    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:38.258701    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:38.324178    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:38.324178    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:38.363665    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:38.363665    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:38.454082    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:38.443711   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.444603   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.447135   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.448189   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.449070   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:38.443711   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.444603   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.447135   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.448189   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:38.449070   18943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:38.454082    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:38.454082    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:38.481686    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:38.481686    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:41.036796    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:41.064580    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 21:38:41.096576    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.096636    4248 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:38:41.100082    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 21:38:41.131382    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.131439    4248 logs.go:284] No container was found matching "etcd"
	I1212 21:38:41.135017    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 21:38:41.164298    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.164360    4248 logs.go:284] No container was found matching "coredns"
	I1212 21:38:41.167964    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 21:38:41.198065    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.198065    4248 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:38:41.202878    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 21:38:41.230510    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.230510    4248 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:38:41.234299    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 21:38:41.263767    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.263767    4248 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:38:41.267078    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 21:38:41.296096    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.296096    4248 logs.go:284] No container was found matching "kindnet"
	I1212 21:38:41.299444    4248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1212 21:38:41.332967    4248 logs.go:282] 0 containers: []
	W1212 21:38:41.332967    4248 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 21:38:41.332967    4248 logs.go:123] Gathering logs for container status ...
	I1212 21:38:41.332967    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:38:41.380925    4248 logs.go:123] Gathering logs for kubelet ...
	I1212 21:38:41.380925    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:38:41.445577    4248 logs.go:123] Gathering logs for dmesg ...
	I1212 21:38:41.445577    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:38:41.484612    4248 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:38:41.484612    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:38:41.569457    4248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:38:41.558338   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.559501   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.561053   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.561965   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.565400   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 21:38:41.558338   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.559501   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.561053   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.561965   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:38:41.565400   19124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:38:41.569457    4248 logs.go:123] Gathering logs for Docker ...
	I1212 21:38:41.569457    4248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 21:38:44.125865    4248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:38:44.149891    4248 out.go:203] 
	W1212 21:38:44.151830    4248 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1212 21:38:44.151830    4248 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1212 21:38:44.152349    4248 out.go:285] * Related issues:
	W1212 21:38:44.152403    4248 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1212 21:38:44.152403    4248 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1212 21:38:44.154560    4248 out.go:203] 
	
	
	==> Docker <==
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.732391828Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.732480039Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.732490940Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.732497041Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.732552048Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.732584552Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.732619056Z" level=info msg="Initializing buildkit"
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.834443812Z" level=info msg="Completed buildkit initialization"
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.839552952Z" level=info msg="Daemon has completed initialization"
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.839689269Z" level=info msg="API listen on /run/docker.sock"
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.839754977Z" level=info msg="API listen on [::]:2376"
	Dec 12 21:30:21 no-preload-285600 dockerd[929]: time="2025-12-12T21:30:21.839713872Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 21:30:21 no-preload-285600 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 12 21:30:22 no-preload-285600 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 12 21:30:22 no-preload-285600 cri-dockerd[1225]: time="2025-12-12T21:30:22Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 12 21:30:22 no-preload-285600 cri-dockerd[1225]: time="2025-12-12T21:30:22Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 12 21:30:22 no-preload-285600 cri-dockerd[1225]: time="2025-12-12T21:30:22Z" level=info msg="Start docker client with request timeout 0s"
	Dec 12 21:30:22 no-preload-285600 cri-dockerd[1225]: time="2025-12-12T21:30:22Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 12 21:30:22 no-preload-285600 cri-dockerd[1225]: time="2025-12-12T21:30:22Z" level=info msg="Loaded network plugin cni"
	Dec 12 21:30:22 no-preload-285600 cri-dockerd[1225]: time="2025-12-12T21:30:22Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 12 21:30:22 no-preload-285600 cri-dockerd[1225]: time="2025-12-12T21:30:22Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 12 21:30:22 no-preload-285600 cri-dockerd[1225]: time="2025-12-12T21:30:22Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 12 21:30:22 no-preload-285600 cri-dockerd[1225]: time="2025-12-12T21:30:22Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 12 21:30:22 no-preload-285600 cri-dockerd[1225]: time="2025-12-12T21:30:22Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 12 21:30:22 no-preload-285600 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 21:49:15.478520   20650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:49:15.479397   20650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:49:15.483805   20650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:49:15.484948   20650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 21:49:15.486111   20650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +5.817259] CPU: 7 PID: 461935 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f4cc709eb20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f4cc709eaf6.
	[  +0.000001] RSP: 002b:00007ffc97ee3b30 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.851832] CPU: 4 PID: 462074 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f8ee5e9fb20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f8ee5e9faf6.
	[  +0.000001] RSP: 002b:00007ffc84e853d0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 21:49:15 up  2:50,  0 user,  load average: 0.47, 0.45, 1.32
	Linux no-preload-285600 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 21:49:12 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:49:12 no-preload-285600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1505.
	Dec 12 21:49:12 no-preload-285600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:49:12 no-preload-285600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:49:13 no-preload-285600 kubelet[20499]: E1212 21:49:13.007475   20499 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:49:13 no-preload-285600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:49:13 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:49:13 no-preload-285600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1506.
	Dec 12 21:49:13 no-preload-285600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:49:13 no-preload-285600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:49:13 no-preload-285600 kubelet[20519]: E1212 21:49:13.751397   20519 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:49:13 no-preload-285600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:49:13 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:49:14 no-preload-285600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1507.
	Dec 12 21:49:14 no-preload-285600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:49:14 no-preload-285600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:49:14 no-preload-285600 kubelet[20546]: E1212 21:49:14.497868   20546 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:49:14 no-preload-285600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:49:14 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:49:15 no-preload-285600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1508.
	Dec 12 21:49:15 no-preload-285600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:49:15 no-preload-285600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:49:15 no-preload-285600 kubelet[20655]: E1212 21:49:15.242097   20655 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:49:15 no-preload-285600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:49:15 no-preload-285600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-285600 -n no-preload-285600
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-285600 -n no-preload-285600: exit status 2 (589.8838ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-285600" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (223.45s)

                                                
                                    

Test pass (358/427)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 9.28
4 TestDownloadOnly/v1.28.0/preload-exists 0.04
7 TestDownloadOnly/v1.28.0/kubectl 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.24
9 TestDownloadOnly/v1.28.0/DeleteAll 1.14
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.81
12 TestDownloadOnly/v1.34.2/json-events 7.48
13 TestDownloadOnly/v1.34.2/preload-exists 0
16 TestDownloadOnly/v1.34.2/kubectl 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.51
18 TestDownloadOnly/v1.34.2/DeleteAll 0.69
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.67
21 TestDownloadOnly/v1.35.0-beta.0/json-events 4.86
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.26
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.89
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.69
29 TestDownloadOnlyKic 1.53
30 TestBinaryMirror 2.54
31 TestOffline 145.73
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.2
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.21
36 TestAddons/Setup 326.51
38 TestAddons/serial/Volcano 51.04
40 TestAddons/serial/GCPAuth/Namespaces 0.24
41 TestAddons/serial/GCPAuth/FakeCredentials 9.13
45 TestAddons/parallel/RegistryCreds 1.42
47 TestAddons/parallel/InspektorGadget 12.14
48 TestAddons/parallel/MetricsServer 9.45
50 TestAddons/parallel/CSI 61.33
51 TestAddons/parallel/Headlamp 31.43
52 TestAddons/parallel/CloudSpanner 7.4
53 TestAddons/parallel/LocalPath 21.99
54 TestAddons/parallel/NvidiaDevicePlugin 7.97
55 TestAddons/parallel/Yakd 11.19
56 TestAddons/parallel/AmdGpuDevicePlugin 7.61
57 TestAddons/StoppedEnableDisable 13.03
58 TestCertOptions 54.5
59 TestCertExpiration 266.27
60 TestDockerFlags 58.35
61 TestForceSystemdFlag 107.53
62 TestForceSystemdEnv 52.49
68 TestErrorSpam/start 2.58
69 TestErrorSpam/status 2.08
70 TestErrorSpam/pause 2.56
71 TestErrorSpam/unpause 2.62
72 TestErrorSpam/stop 18.34
75 TestFunctional/serial/CopySyncFile 0.04
76 TestFunctional/serial/StartWithProxy 78.24
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 47.46
79 TestFunctional/serial/KubeContext 0.09
80 TestFunctional/serial/KubectlGetPods 0.25
83 TestFunctional/serial/CacheCmd/cache/add_remote 9.98
84 TestFunctional/serial/CacheCmd/cache/add_local 4.21
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.19
86 TestFunctional/serial/CacheCmd/cache/list 0.19
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.59
88 TestFunctional/serial/CacheCmd/cache/cache_reload 4.45
89 TestFunctional/serial/CacheCmd/cache/delete 0.37
90 TestFunctional/serial/MinikubeKubectlCmd 0.37
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 2.21
92 TestFunctional/serial/ExtraConfig 43.3
93 TestFunctional/serial/ComponentHealth 0.13
94 TestFunctional/serial/LogsCmd 1.77
95 TestFunctional/serial/LogsFileCmd 1.89
96 TestFunctional/serial/InvalidService 5.48
98 TestFunctional/parallel/ConfigCmd 1.15
100 TestFunctional/parallel/DryRun 1.59
101 TestFunctional/parallel/InternationalLanguage 0.62
102 TestFunctional/parallel/StatusCmd 1.96
107 TestFunctional/parallel/AddonsCmd 0.49
108 TestFunctional/parallel/PersistentVolumeClaim 64.83
110 TestFunctional/parallel/SSHCmd 1.49
111 TestFunctional/parallel/CpCmd 4.39
112 TestFunctional/parallel/MySQL 82.74
113 TestFunctional/parallel/FileSync 0.58
114 TestFunctional/parallel/CertSync 3.59
118 TestFunctional/parallel/NodeLabels 0.14
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.59
122 TestFunctional/parallel/License 1.57
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.46
124 TestFunctional/parallel/UpdateContextCmd/no_changes 0.31
125 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.3
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.44
127 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.32
128 TestFunctional/parallel/ImageCommands/ImageListJson 0.44
129 TestFunctional/parallel/ImageCommands/ImageListYaml 0.45
130 TestFunctional/parallel/ImageCommands/ImageBuild 4.88
131 TestFunctional/parallel/ImageCommands/Setup 1.78
132 TestFunctional/parallel/Version/short 0.17
133 TestFunctional/parallel/Version/components 0.89
134 TestFunctional/parallel/DockerEnv/powershell 6.08
135 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.57
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.29
138 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.95
139 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 54.53
142 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.77
143 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.08
144 TestFunctional/parallel/ImageCommands/ImageRemove 0.92
145 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.21
146 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.74
147 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
152 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.21
153 TestFunctional/parallel/ServiceCmd/DeployApp 8.28
154 TestFunctional/parallel/ProfileCmd/profile_not_create 1
155 TestFunctional/parallel/ProfileCmd/profile_list 0.87
156 TestFunctional/parallel/ProfileCmd/profile_json_output 1.19
157 TestFunctional/parallel/ServiceCmd/List 1.33
158 TestFunctional/parallel/ServiceCmd/JSONOutput 1.21
159 TestFunctional/parallel/ServiceCmd/HTTPS 15.01
160 TestFunctional/parallel/ServiceCmd/Format 15.01
161 TestFunctional/parallel/ServiceCmd/URL 15.01
162 TestFunctional/delete_echo-server_images 0.14
163 TestFunctional/delete_my-image_image 0.06
164 TestFunctional/delete_minikube_cached_images 0.06
168 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.1
176 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 9.61
177 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 3.69
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.17
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.18
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.59
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 4.39
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.36
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.22
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.32
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 1.15
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 1.56
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.69
200 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.4
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 1.14
204 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 3.3
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.54
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 3.21
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.61
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 2.66
216 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.15
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.95
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.3
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.32
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.31
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.88
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.81
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.8
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.48
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.45
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.47
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.46
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 5
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.87
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 3.22
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 2.8
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 3.57
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.67
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.93
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 1.21
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.89
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.14
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.06
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.06
260 TestMultiControlPlane/serial/StartCluster 240.88
261 TestMultiControlPlane/serial/DeployApp 8.92
262 TestMultiControlPlane/serial/PingHostFromPods 2.5
263 TestMultiControlPlane/serial/AddWorkerNode 55.38
264 TestMultiControlPlane/serial/NodeLabels 0.14
265 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.96
266 TestMultiControlPlane/serial/CopyFile 33.57
267 TestMultiControlPlane/serial/StopSecondaryNode 13.49
268 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 1.56
269 TestMultiControlPlane/serial/RestartSecondaryNode 103.68
270 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 2.04
271 TestMultiControlPlane/serial/RestartClusterKeepsNodes 302.98
272 TestMultiControlPlane/serial/DeleteSecondaryNode 14.44
273 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.5
274 TestMultiControlPlane/serial/StopCluster 35.47
275 TestMultiControlPlane/serial/RestartCluster 81.32
276 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.64
277 TestMultiControlPlane/serial/AddSecondaryNode 96.39
278 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 2
281 TestImageBuild/serial/Setup 48.55
282 TestImageBuild/serial/NormalBuild 3.89
283 TestImageBuild/serial/BuildWithBuildArg 2.42
284 TestImageBuild/serial/BuildWithDockerIgnore 1.19
285 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.23
290 TestJSONOutput/start/Command 76.68
291 TestJSONOutput/start/Audit 0
293 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/pause/Command 1.13
297 TestJSONOutput/pause/Audit 0
299 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/unpause/Command 0.88
303 TestJSONOutput/unpause/Audit 0
305 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
308 TestJSONOutput/stop/Command 12.18
309 TestJSONOutput/stop/Audit 0
311 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
312 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
313 TestErrorJSONOutput 0.67
315 TestKicCustomNetwork/create_custom_network 53.47
316 TestKicCustomNetwork/use_default_bridge_network 53.43
317 TestKicExistingNetwork 54.49
318 TestKicCustomSubnet 54.05
319 TestKicStaticIP 54.38
320 TestMainNoArgs 0.17
321 TestMinikubeProfile 98.76
324 TestMountStart/serial/StartWithMountFirst 13.85
325 TestMountStart/serial/VerifyMountFirst 0.58
326 TestMountStart/serial/StartWithMountSecond 13.55
327 TestMountStart/serial/VerifyMountSecond 0.55
328 TestMountStart/serial/DeleteFirst 2.43
329 TestMountStart/serial/VerifyMountPostDelete 0.55
330 TestMountStart/serial/Stop 1.87
331 TestMountStart/serial/RestartStopped 10.8
332 TestMountStart/serial/VerifyMountPostStop 0.55
335 TestMultiNode/serial/FreshStart2Nodes 131.07
336 TestMultiNode/serial/DeployApp2Nodes 7.19
337 TestMultiNode/serial/PingHostFrom2Pods 1.8
338 TestMultiNode/serial/AddNode 53.66
339 TestMultiNode/serial/MultiNodeLabels 0.14
340 TestMultiNode/serial/ProfileList 1.39
341 TestMultiNode/serial/CopyFile 19.35
342 TestMultiNode/serial/StopNode 3.89
343 TestMultiNode/serial/StartAfterStop 13.24
344 TestMultiNode/serial/RestartKeepsNodes 85.93
345 TestMultiNode/serial/DeleteNode 8.28
346 TestMultiNode/serial/StopMultiNode 24.01
347 TestMultiNode/serial/RestartMultiNode 56.64
348 TestMultiNode/serial/ValidateNameConflict 49.81
352 TestPreload 159.98
353 TestScheduledStopWindows 113.76
357 TestInsufficientStorage 28.76
358 TestRunningBinaryUpgrade 128.46
361 TestMissingContainerUpgrade 141.65
364 TestNoKubernetes/serial/StartNoK8sWithVersion 0.28
375 TestNoKubernetes/serial/StartWithK8s 101.22
376 TestStoppedBinaryUpgrade/Setup 1.63
377 TestStoppedBinaryUpgrade/Upgrade 410.61
378 TestNoKubernetes/serial/StartWithStopK8s 29.88
379 TestNoKubernetes/serial/Start 14.7
380 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
381 TestNoKubernetes/serial/VerifyK8sNotRunning 0.63
382 TestNoKubernetes/serial/ProfileList 38.04
383 TestNoKubernetes/serial/Stop 2.01
384 TestNoKubernetes/serial/StartNoArgs 11.89
385 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.55
394 TestPause/serial/Start 81.47
395 TestPause/serial/SecondStartNoReconfiguration 43.81
396 TestStoppedBinaryUpgrade/MinikubeLogs 1.32
397 TestPause/serial/Pause 1.18
398 TestPause/serial/VerifyStatus 0.64
399 TestPause/serial/Unpause 0.93
400 TestPause/serial/PauseAgain 1.21
401 TestPause/serial/DeletePaused 5.53
402 TestPause/serial/VerifyDeletedResources 1.34
403 TestNetworkPlugins/group/auto/Start 84.26
404 TestNetworkPlugins/group/flannel/Start 72.58
405 TestNetworkPlugins/group/flannel/ControllerPod 6.01
406 TestNetworkPlugins/group/auto/KubeletFlags 0.56
407 TestNetworkPlugins/group/auto/NetCatPod 16.56
408 TestNetworkPlugins/group/flannel/KubeletFlags 0.56
409 TestNetworkPlugins/group/flannel/NetCatPod 16.49
410 TestNetworkPlugins/group/auto/DNS 0.26
411 TestNetworkPlugins/group/auto/Localhost 0.21
412 TestNetworkPlugins/group/auto/HairPin 0.21
413 TestNetworkPlugins/group/flannel/DNS 0.24
414 TestNetworkPlugins/group/flannel/Localhost 0.21
415 TestNetworkPlugins/group/flannel/HairPin 0.2
416 TestNetworkPlugins/group/enable-default-cni/Start 100.98
417 TestNetworkPlugins/group/bridge/Start 93.15
418 TestNetworkPlugins/group/kubenet/Start 96.87
419 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.56
420 TestNetworkPlugins/group/enable-default-cni/NetCatPod 14.49
421 TestNetworkPlugins/group/enable-default-cni/DNS 0.24
422 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
423 TestNetworkPlugins/group/enable-default-cni/HairPin 0.21
424 TestNetworkPlugins/group/bridge/KubeletFlags 0.57
425 TestNetworkPlugins/group/bridge/NetCatPod 14.47
426 TestNetworkPlugins/group/kubenet/KubeletFlags 0.57
427 TestNetworkPlugins/group/kubenet/NetCatPod 16.58
428 TestNetworkPlugins/group/bridge/DNS 0.26
429 TestNetworkPlugins/group/bridge/Localhost 0.23
430 TestNetworkPlugins/group/bridge/HairPin 0.46
431 TestNetworkPlugins/group/kubenet/DNS 0.29
432 TestNetworkPlugins/group/kubenet/Localhost 0.21
433 TestNetworkPlugins/group/kubenet/HairPin 0.21
434 TestNetworkPlugins/group/calico/Start 119.02
435 TestNetworkPlugins/group/kindnet/Start 80.11
436 TestNetworkPlugins/group/custom-flannel/Start 75.04
437 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
438 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.59
439 TestNetworkPlugins/group/custom-flannel/NetCatPod 14.41
440 TestNetworkPlugins/group/kindnet/KubeletFlags 0.63
441 TestNetworkPlugins/group/kindnet/NetCatPod 17.64
442 TestNetworkPlugins/group/calico/ControllerPod 6.01
443 TestNetworkPlugins/group/custom-flannel/DNS 0.24
444 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
445 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
446 TestNetworkPlugins/group/calico/KubeletFlags 0.56
447 TestNetworkPlugins/group/calico/NetCatPod 15.51
448 TestNetworkPlugins/group/kindnet/DNS 0.25
449 TestNetworkPlugins/group/kindnet/Localhost 0.23
450 TestNetworkPlugins/group/kindnet/HairPin 0.23
451 TestNetworkPlugins/group/calico/DNS 0.3
452 TestNetworkPlugins/group/calico/Localhost 0.22
453 TestNetworkPlugins/group/calico/HairPin 0.22
454 TestNetworkPlugins/group/false/Start 102.42
456 TestStartStop/group/old-k8s-version/serial/FirstStart 108.21
460 TestStartStop/group/embed-certs/serial/FirstStart 81.13
461 TestNetworkPlugins/group/false/KubeletFlags 0.58
462 TestNetworkPlugins/group/false/NetCatPod 13.62
463 TestStartStop/group/old-k8s-version/serial/DeployApp 9.64
464 TestNetworkPlugins/group/false/DNS 0.25
465 TestNetworkPlugins/group/false/Localhost 0.23
466 TestNetworkPlugins/group/false/HairPin 0.21
467 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.67
468 TestStartStop/group/old-k8s-version/serial/Stop 12.17
469 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.63
470 TestStartStop/group/old-k8s-version/serial/SecondStart 57.18
472 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 82.71
473 TestStartStop/group/embed-certs/serial/DeployApp 9.61
474 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.55
475 TestStartStop/group/embed-certs/serial/Stop 12.45
476 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.58
477 TestStartStop/group/embed-certs/serial/SecondStart 49.62
478 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
479 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.32
480 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.47
481 TestStartStop/group/old-k8s-version/serial/Pause 5.21
484 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.28
485 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
486 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.53
487 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.3
488 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.3
489 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.48
490 TestStartStop/group/embed-certs/serial/Pause 5.41
491 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.57
492 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 48.84
493 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.02
494 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.23
495 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.48
496 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.98
499 TestStartStop/group/no-preload/serial/Stop 1.85
500 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.51
502 TestStartStop/group/newest-cni/serial/DeployApp 0
504 TestStartStop/group/newest-cni/serial/Stop 1.88
505 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.53
508 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
509 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
510 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.48
TestDownloadOnly/v1.28.0/json-events (9.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-781900 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-781900 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker: (9.2800172s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (9.28s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0.04s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1212 19:29:03.604888   13396 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1212 19:29:03.646816   13396 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.04s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
--- PASS: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-781900
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-781900: exit status 85 (233.4521ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬───────────────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                       ARGS                                                                        │       PROFILE        │       USER        │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼───────────────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-781900 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker │ download-only-781900 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴───────────────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 19:28:54
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 19:28:54.394562    1500 out.go:360] Setting OutFile to fd 752 ...
	I1212 19:28:54.437401    1500 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:28:54.437401    1500 out.go:374] Setting ErrFile to fd 756...
	I1212 19:28:54.437401    1500 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1212 19:28:54.448551    1500 root.go:314] Error reading config file at C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I1212 19:28:54.454636    1500 out.go:368] Setting JSON to true
	I1212 19:28:54.456783    1500 start.go:133] hostinfo: {"hostname":"minikube4","uptime":1872,"bootTime":1765565861,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 19:28:54.456783    1500 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 19:28:54.468822    1500 out.go:99] [download-only-781900] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 19:28:54.468822    1500 notify.go:221] Checking for updates...
	W1212 19:28:54.468822    1500 preload.go:354] Failed to list preload files: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I1212 19:28:54.473805    1500 out.go:171] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 19:28:54.484938    1500 out.go:171] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 19:28:54.488243    1500 out.go:171] MINIKUBE_LOCATION=22112
	I1212 19:28:54.490501    1500 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1212 19:28:54.494322    1500 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 19:28:54.495296    1500 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:28:54.692881    1500 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 19:28:54.697150    1500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:28:55.380344    1500 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:78 SystemTime:2025-12-12 19:28:55.355483603 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 19:28:55.383836    1500 out.go:99] Using the docker driver based on user configuration
	I1212 19:28:55.383932    1500 start.go:309] selected driver: docker
	I1212 19:28:55.383962    1500 start.go:927] validating driver "docker" against <nil>
	I1212 19:28:55.389721    1500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:28:55.633850    1500 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:78 SystemTime:2025-12-12 19:28:55.616371213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 19:28:55.634962    1500 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 19:28:55.685915    1500 start_flags.go:410] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I1212 19:28:55.686535    1500 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 19:28:55.696047    1500 out.go:171] Using Docker Desktop driver with root privileges
	I1212 19:28:55.699456    1500 cni.go:84] Creating CNI manager for ""
	I1212 19:28:55.699456    1500 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 19:28:55.699456    1500 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 19:28:55.699456    1500 start.go:353] cluster config:
	{Name:download-only-781900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-781900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:28:55.702601    1500 out.go:99] Starting "download-only-781900" primary control-plane node in "download-only-781900" cluster
	I1212 19:28:55.702601    1500 cache.go:134] Beginning downloading kic base image for docker with docker
	I1212 19:28:55.703944    1500 out.go:99] Pulling base image v0.0.48-1765505794-22112 ...
	I1212 19:28:55.703944    1500 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1212 19:28:55.704937    1500 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 19:28:55.740270    1500 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1212 19:28:55.740355    1500 cache.go:65] Caching tarball of preloaded images
	I1212 19:28:55.740819    1500 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1212 19:28:55.743554    1500 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1212 19:28:55.743600    1500 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1212 19:28:55.760585    1500 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 to local cache
	I1212 19:28:55.760585    1500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1765505794-22112@sha256_ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138.tar
	I1212 19:28:55.760585    1500 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1765505794-22112@sha256_ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138.tar
	I1212 19:28:55.760585    1500 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local cache directory
	I1212 19:28:55.761593    1500 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 to local cache
	I1212 19:28:55.819540    1500 preload.go:295] Got checksum from GCS API "8a955be835827bc584bcce0658a7fcc9"
	I1212 19:28:55.820096    1500 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8a955be835827bc584bcce0658a7fcc9 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-781900 host does not exist
	  To start a cluster, run: "minikube start -p download-only-781900"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.24s)

TestDownloadOnly/v1.28.0/DeleteAll (1.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:196: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.1410973s)
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (1.14s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.81s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-781900
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.81s)

TestDownloadOnly/v1.34.2/json-events (7.48s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-504500 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-504500 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker: (7.4814954s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (7.48s)

TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1212 19:29:13.314383   13396 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
I1212 19:29:13.314383   13396 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.2/kubectl
--- PASS: TestDownloadOnly/v1.34.2/kubectl (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.51s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-504500
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-504500: exit status 85 (511.0268ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │       PROFILE        │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-781900 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker │ download-only-781900 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │                     │
	│ delete  │ --all                                                                                                                                             │ minikube             │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │ 12 Dec 25 19:29 UTC │
	│ delete  │ -p download-only-781900                                                                                                                           │ download-only-781900 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │ 12 Dec 25 19:29 UTC │
	│ start   │ -o=json --download-only -p download-only-504500 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker │ download-only-504500 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 19:29:05
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 19:29:05.904082    1580 out.go:360] Setting OutFile to fd 864 ...
	I1212 19:29:05.946166    1580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:29:05.946166    1580 out.go:374] Setting ErrFile to fd 868...
	I1212 19:29:05.946166    1580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:29:05.960131    1580 out.go:368] Setting JSON to true
	I1212 19:29:05.962966    1580 start.go:133] hostinfo: {"hostname":"minikube4","uptime":1883,"bootTime":1765565861,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 19:29:05.962966    1580 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 19:29:05.967831    1580 out.go:99] [download-only-504500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 19:29:05.968035    1580 notify.go:221] Checking for updates...
	I1212 19:29:05.969348    1580 out.go:171] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 19:29:05.972243    1580 out.go:171] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 19:29:05.974771    1580 out.go:171] MINIKUBE_LOCATION=22112
	I1212 19:29:05.985069    1580 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1212 19:29:05.989777    1580 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 19:29:05.990373    1580 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:29:06.105663    1580 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 19:29:06.110219    1580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:29:06.346470    1580 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:78 SystemTime:2025-12-12 19:29:06.327267968 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 19:29:06.565796    1580 out.go:99] Using the docker driver based on user configuration
	I1212 19:29:06.566139    1580 start.go:309] selected driver: docker
	I1212 19:29:06.566139    1580 start.go:927] validating driver "docker" against <nil>
	I1212 19:29:06.573253    1580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:29:06.816069    1580 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:78 SystemTime:2025-12-12 19:29:06.798459818 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 19:29:06.816069    1580 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 19:29:06.850505    1580 start_flags.go:410] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I1212 19:29:06.851087    1580 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 19:29:07.152765    1580 out.go:171] Using Docker Desktop driver with root privileges
	I1212 19:29:07.155366    1580 cni.go:84] Creating CNI manager for ""
	I1212 19:29:07.155700    1580 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 19:29:07.155700    1580 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 19:29:07.155941    1580 start.go:353] cluster config:
	{Name:download-only-504500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-504500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:29:07.159399    1580 out.go:99] Starting "download-only-504500" primary control-plane node in "download-only-504500" cluster
	I1212 19:29:07.159399    1580 cache.go:134] Beginning downloading kic base image for docker with docker
	I1212 19:29:07.161458    1580 out.go:99] Pulling base image v0.0.48-1765505794-22112 ...
	I1212 19:29:07.161989    1580 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1212 19:29:07.162043    1580 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 19:29:07.196477    1580 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1212 19:29:07.196539    1580 cache.go:65] Caching tarball of preloaded images
	I1212 19:29:07.197474    1580 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1212 19:29:07.200263    1580 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1212 19:29:07.200263    1580 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1212 19:29:07.218546    1580 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 to local cache
	I1212 19:29:07.218546    1580 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1765505794-22112@sha256_ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138.tar
	I1212 19:29:07.219553    1580 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1765505794-22112@sha256_ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138.tar
	I1212 19:29:07.219553    1580 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local cache directory
	I1212 19:29:07.219553    1580 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local cache directory, skipping pull
	I1212 19:29:07.219553    1580 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in cache, skipping pull
	I1212 19:29:07.219553    1580 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 as a tarball
	I1212 19:29:07.264852    1580 preload.go:295] Got checksum from GCS API "cafa99c47d4d00983a02f051962239e0"
	I1212 19:29:07.265046    1580 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4?checksum=md5:cafa99c47d4d00983a02f051962239e0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-504500 host does not exist
	  To start a cluster, run: "minikube start -p download-only-504500"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.51s)

TestDownloadOnly/v1.34.2/DeleteAll (0.69s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.69s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.67s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-504500
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.67s)

TestDownloadOnly/v1.35.0-beta.0/json-events (4.86s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-443800 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-443800 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=docker: (4.8556716s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (4.86s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1212 19:29:20.049207   13396 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
I1212 19:29:20.049207   13396 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.26s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-443800
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-443800: exit status 85 (254.0416ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                           │       PROFILE        │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-781900 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker        │ download-only-781900 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                    │ minikube             │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │ 12 Dec 25 19:29 UTC │
	│ delete  │ -p download-only-781900                                                                                                                                  │ download-only-781900 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │ 12 Dec 25 19:29 UTC │
	│ start   │ -o=json --download-only -p download-only-504500 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker        │ download-only-504500 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                    │ minikube             │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │ 12 Dec 25 19:29 UTC │
	│ delete  │ -p download-only-504500                                                                                                                                  │ download-only-504500 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │ 12 Dec 25 19:29 UTC │
	│ start   │ -o=json --download-only -p download-only-443800 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=docker │ download-only-443800 │ minikube4\jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 19:29:15
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 19:29:15.268144   13932 out.go:360] Setting OutFile to fd 892 ...
	I1212 19:29:15.311141   13932 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:29:15.311141   13932 out.go:374] Setting ErrFile to fd 904...
	I1212 19:29:15.311141   13932 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:29:15.325143   13932 out.go:368] Setting JSON to true
	I1212 19:29:15.328139   13932 start.go:133] hostinfo: {"hostname":"minikube4","uptime":1893,"bootTime":1765565861,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 19:29:15.328139   13932 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 19:29:15.333132   13932 out.go:99] [download-only-443800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 19:29:15.333132   13932 notify.go:221] Checking for updates...
	I1212 19:29:15.335132   13932 out.go:171] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 19:29:15.337133   13932 out.go:171] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 19:29:15.340135   13932 out.go:171] MINIKUBE_LOCATION=22112
	I1212 19:29:15.343134   13932 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1212 19:29:15.348132   13932 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 19:29:15.348132   13932 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:29:15.455141   13932 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 19:29:15.458133   13932 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:29:15.692451   13932 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:78 SystemTime:2025-12-12 19:29:15.672846591 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 19:29:15.701650   13932 out.go:99] Using the docker driver based on user configuration
	I1212 19:29:15.701650   13932 start.go:309] selected driver: docker
	I1212 19:29:15.701650   13932 start.go:927] validating driver "docker" against <nil>
	I1212 19:29:15.707610   13932 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:29:15.930375   13932 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:78 SystemTime:2025-12-12 19:29:15.913924595 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 19:29:15.930375   13932 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 19:29:15.965825   13932 start_flags.go:410] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I1212 19:29:15.967096   13932 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 19:29:15.970631   13932 out.go:171] Using Docker Desktop driver with root privileges
	
	
	* The control-plane node download-only-443800 host does not exist
	  To start a cluster, run: "minikube start -p download-only-443800"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.26s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.89s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.89s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.69s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-443800
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.69s)

TestDownloadOnlyKic (1.53s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-192900 --alsologtostderr --driver=docker
aaa_download_only_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-192900 --alsologtostderr --driver=docker: (1.0183386s)
helpers_test.go:176: Cleaning up "download-docker-192900" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-192900
--- PASS: TestDownloadOnlyKic (1.53s)

TestBinaryMirror (2.54s)

=== RUN   TestBinaryMirror
I1212 19:29:25.107524   13396 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/windows/amd64/kubectl.exe.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-868900 --alsologtostderr --binary-mirror http://127.0.0.1:54499 --driver=docker
aaa_download_only_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-868900 --alsologtostderr --binary-mirror http://127.0.0.1:54499 --driver=docker: (1.6928493s)
helpers_test.go:176: Cleaning up "binary-mirror-868900" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-868900
--- PASS: TestBinaryMirror (2.54s)

TestOffline (145.73s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-601600 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-601600 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker: (2m21.6122363s)
helpers_test.go:176: Cleaning up "offline-docker-601600" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-601600
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-601600: (4.1185113s)
--- PASS: TestOffline (145.73s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.2s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-349200
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-349200: exit status 85 (202.0902ms)

-- stdout --
	* Profile "addons-349200" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-349200"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.20s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-349200
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-349200: exit status 85 (209.6508ms)

-- stdout --
	* Profile "addons-349200" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-349200"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

TestAddons/Setup (326.51s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-349200 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-349200 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (5m26.5073141s)
--- PASS: TestAddons/Setup (326.51s)

TestAddons/serial/Volcano (51.04s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:886: volcano-controller stabilized in 17.2637ms
addons_test.go:878: volcano-admission stabilized in 17.2637ms
addons_test.go:870: volcano-scheduler stabilized in 17.2637ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-76c996c8bf-vx485" [6fadf54b-9058-470a-9d50-bfa6ff4906d4] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.0061372s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-6c447bd768-p54gd" [186c0137-7ec0-437f-b684-9ce58213c425] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.0073698s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-6fd4f85cb8-g9ft2" [0e4d74a5-5ce2-49e9-989d-97c837421d11] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.0075755s
addons_test.go:905: (dbg) Run:  kubectl --context addons-349200 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-349200 create -f testdata\vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-349200 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [6f72681f-64b8-4cea-8ba2-073718e5b6e3] Pending
helpers_test.go:353: "test-job-nginx-0" [6f72681f-64b8-4cea-8ba2-073718e5b6e3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [6f72681f-64b8-4cea-8ba2-073718e5b6e3] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 21.0067402s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-349200 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-349200 addons disable volcano --alsologtostderr -v=1: (12.2500991s)
--- PASS: TestAddons/serial/Volcano (51.04s)

TestAddons/serial/GCPAuth/Namespaces (0.24s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-349200 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-349200 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.24s)

TestAddons/serial/GCPAuth/FakeCredentials (9.13s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-349200 create -f testdata\busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-349200 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [4b61dcb9-6ade-463f-b03a-17c43b33c5d6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [4b61dcb9-6ade-463f-b03a-17c43b33c5d6] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.0064591s
addons_test.go:696: (dbg) Run:  kubectl --context addons-349200 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-349200 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-349200 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-349200 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.13s)

TestAddons/parallel/RegistryCreds (1.42s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 8.3823ms
addons_test.go:327: (dbg) Run:  out/minikube-windows-amd64.exe addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-349200
addons_test.go:334: (dbg) Run:  kubectl --context addons-349200 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-349200 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (1.42s)

TestAddons/parallel/InspektorGadget (12.14s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-7xjd7" [7ace374d-4449-436a-922c-aac22da314a0] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0055238s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-349200 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-349200 addons disable inspektor-gadget --alsologtostderr -v=1: (6.1337484s)
--- PASS: TestAddons/parallel/InspektorGadget (12.14s)

TestAddons/parallel/MetricsServer (9.45s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 7.7893ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-xgfvf" [13168cc9-cf77-45fe-9329-44a3c628e086] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.0116907s
addons_test.go:465: (dbg) Run:  kubectl --context addons-349200 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-349200 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-349200 addons disable metrics-server --alsologtostderr -v=1: (3.2258789s)
--- PASS: TestAddons/parallel/MetricsServer (9.45s)

TestAddons/parallel/CSI (61.33s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1212 19:36:27.748987   13396 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1212 19:36:27.787890   13396 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1212 19:36:27.787890   13396 kapi.go:107] duration metric: took 38.9584ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 38.9584ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-349200 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-349200 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [7ff29cce-5893-42bf-94a6-de14074fbc4a] Pending
helpers_test.go:353: "task-pv-pod" [7ff29cce-5893-42bf-94a6-de14074fbc4a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [7ff29cce-5893-42bf-94a6-de14074fbc4a] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.0064386s
addons_test.go:574: (dbg) Run:  kubectl --context addons-349200 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-349200 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-349200 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-349200 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-349200 delete pod task-pv-pod: (1.4030991s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-349200 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-349200 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-349200 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [ac6b339d-59fe-45e5-92e7-5d1e6a5b6a7a] Pending
helpers_test.go:353: "task-pv-pod-restore" [ac6b339d-59fe-45e5-92e7-5d1e6a5b6a7a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [ac6b339d-59fe-45e5-92e7-5d1e6a5b6a7a] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.0067429s
addons_test.go:616: (dbg) Run:  kubectl --context addons-349200 delete pod task-pv-pod-restore
addons_test.go:616: (dbg) Done: kubectl --context addons-349200 delete pod task-pv-pod-restore: (1.6567534s)
addons_test.go:620: (dbg) Run:  kubectl --context addons-349200 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-349200 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-349200 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-349200 addons disable volumesnapshots --alsologtostderr -v=1: (1.3439149s)
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-349200 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-349200 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.529591s)
--- PASS: TestAddons/parallel/CSI (61.33s)

TestAddons/parallel/Headlamp (31.43s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-349200 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-349200 --alsologtostderr -v=1: (1.2385323s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-w5rdg" [a2e0923d-1941-4f96-a544-a527bd9e9dac] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-w5rdg" [a2e0923d-1941-4f96-a544-a527bd9e9dac] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 22.0725322s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-349200 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-349200 addons disable headlamp --alsologtostderr -v=1: (8.115013s)
--- PASS: TestAddons/parallel/Headlamp (31.43s)

TestAddons/parallel/CloudSpanner (7.40s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-jk9hl" [c3b2a19d-4e41-462b-83ee-46b9db91ed9e] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.0072203s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-349200 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (7.40s)

TestAddons/parallel/LocalPath (21.99s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-349200 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-349200 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-349200 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [5bfdcb93-c752-4b76-9166-d76a097154bf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [5bfdcb93-c752-4b76-9166-d76a097154bf] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [5bfdcb93-c752-4b76-9166-d76a097154bf] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.0050813s
addons_test.go:969: (dbg) Run:  kubectl --context addons-349200 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-349200 ssh "cat /opt/local-path-provisioner/pvc-1f44f45b-ddda-4003-b3a6-a66093b745e8_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-349200 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-349200 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-349200 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (21.99s)

TestAddons/parallel/NvidiaDevicePlugin (7.97s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-vpgzd" [f9126e83-75fc-451d-b6f4-ab63362e7fd5] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0120053s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-349200 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-349200 addons disable nvidia-device-plugin --alsologtostderr -v=1: (1.9508241s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.97s)

TestAddons/parallel/Yakd (11.19s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-lf267" [58b8d2eb-6f6d-4a16-a97f-9521af3a60db] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.0071242s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-349200 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-349200 addons disable yakd --alsologtostderr -v=1: (6.1752795s)
--- PASS: TestAddons/parallel/Yakd (11.19s)

TestAddons/parallel/AmdGpuDevicePlugin (7.61s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-96stw" [9ecfc32a-f950-418b-83f6-ef3c1c2f9cec] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.0053831s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-349200 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-349200 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: (1.6055423s)
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (7.61s)

TestAddons/StoppedEnableDisable (13.03s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-349200
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-349200: (12.2200203s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-349200
addons_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-349200
addons_test.go:187: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-349200
--- PASS: TestAddons/StoppedEnableDisable (13.03s)

TestCertOptions (54.50s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-249600 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-249600 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (48.8934541s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-249600 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
I1212 21:11:42.554650   13396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort}}'" cert-options-249600
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-249600 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-249600" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-249600
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-249600: (4.0493523s)
--- PASS: TestCertOptions (54.50s)

TestCertExpiration (266.27s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-009400 --memory=3072 --cert-expiration=3m --driver=docker
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-009400 --memory=3072 --cert-expiration=3m --driver=docker: (47.6124731s)
E1212 21:09:54.823862   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-009400 --memory=3072 --cert-expiration=8760h --driver=docker
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-009400 --memory=3072 --cert-expiration=8760h --driver=docker: (34.5756074s)
helpers_test.go:176: Cleaning up "cert-expiration-009400" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-009400
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-009400: (4.0767415s)
--- PASS: TestCertExpiration (266.27s)

TestDockerFlags (58.35s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-843400 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-843400 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (53.360235s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-843400 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-843400 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-843400" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-843400
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-843400: (3.7932413s)
--- PASS: TestDockerFlags (58.35s)

TestForceSystemdFlag (107.53s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-601600 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-601600 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker: (1m42.0770244s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-601600 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-flag-601600" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-601600
E1212 21:05:18.011639   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-601600: (4.6465068s)
--- PASS: TestForceSystemdFlag (107.53s)
TestForceSystemdEnv (52.49s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-427100 --memory=3072 --alsologtostderr -v=5 --driver=docker
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-427100 --memory=3072 --alsologtostderr -v=5 --driver=docker: (48.0226637s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-427100 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-env-427100" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-427100
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-427100: (3.8499703s)
--- PASS: TestForceSystemdEnv (52.49s)
TestErrorSpam/start (2.58s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-169700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-169700 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-169700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-169700 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-169700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-169700 start --dry-run
--- PASS: TestErrorSpam/start (2.58s)
TestErrorSpam/status (2.08s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-169700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-169700 status
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-169700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-169700 status
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-169700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-169700 status
--- PASS: TestErrorSpam/status (2.08s)
TestErrorSpam/pause (2.56s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-169700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-169700 pause
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-169700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-169700 pause: (1.1244594s)
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-169700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-169700 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-169700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-169700 pause
--- PASS: TestErrorSpam/pause (2.56s)
TestErrorSpam/unpause (2.62s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-169700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-169700 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-169700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-169700 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-169700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-169700 unpause
--- PASS: TestErrorSpam/unpause (2.62s)
TestErrorSpam/stop (18.34s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-169700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-169700 stop
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-169700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-169700 stop: (12.0440985s)
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-169700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-169700 stop
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-169700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-169700 stop: (3.1081194s)
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-169700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-169700 stop
error_spam_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-169700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-169700 stop: (3.181047s)
--- PASS: TestErrorSpam/stop (18.34s)
TestFunctional/serial/CopySyncFile (0.04s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\13396\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)
TestFunctional/serial/StartWithProxy (78.24s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-461000 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker
E1212 19:39:54.758361   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:39:54.765331   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:39:54.777158   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:39:54.798486   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:39:54.840438   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:39:54.922385   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:39:55.084267   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:39:55.406647   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:39:56.048636   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:39:57.329910   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:39:59.891733   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:40:05.014287   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:40:15.256273   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-461000 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker: (1m18.2316398s)
--- PASS: TestFunctional/serial/StartWithProxy (78.24s)
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
TestFunctional/serial/SoftStart (47.46s)
=== RUN   TestFunctional/serial/SoftStart
I1212 19:40:26.275575   13396 config.go:182] Loaded profile config "functional-461000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-461000 --alsologtostderr -v=8
E1212 19:40:35.738432   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-461000 --alsologtostderr -v=8: (47.455046s)
functional_test.go:678: soft start took 47.4578139s for "functional-461000" cluster.
I1212 19:41:13.732125   13396 config.go:182] Loaded profile config "functional-461000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (47.46s)
TestFunctional/serial/KubeContext (0.09s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.09s)
TestFunctional/serial/KubectlGetPods (0.25s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-461000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.25s)
TestFunctional/serial/CacheCmd/cache/add_remote (9.98s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 cache add registry.k8s.io/pause:3.1
E1212 19:41:16.701261   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-461000 cache add registry.k8s.io/pause:3.1: (3.6687432s)
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-461000 cache add registry.k8s.io/pause:3.3: (3.1707867s)
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-461000 cache add registry.k8s.io/pause:latest: (3.1411278s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (9.98s)
TestFunctional/serial/CacheCmd/cache/add_local (4.21s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-461000 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1436569247\001
functional_test.go:1092: (dbg) Done: docker build -t minikube-local-cache-test:functional-461000 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1436569247\001: (1.3446415s)
functional_test.go:1104: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 cache add minikube-local-cache-test:functional-461000
functional_test.go:1104: (dbg) Done: out/minikube-windows-amd64.exe -p functional-461000 cache add minikube-local-cache-test:functional-461000: (2.5991584s)
functional_test.go:1109: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 cache delete minikube-local-cache-test:functional-461000
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-461000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (4.21s)
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.19s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.19s)
TestFunctional/serial/CacheCmd/cache/list (0.19s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.19s)
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.59s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.59s)
TestFunctional/serial/CacheCmd/cache/cache_reload (4.45s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-461000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (590.4221ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-windows-amd64.exe -p functional-461000 cache reload: (2.7080946s)
functional_test.go:1178: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (4.45s)
TestFunctional/serial/CacheCmd/cache/delete (0.37s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.37s)
TestFunctional/serial/MinikubeKubectlCmd (0.37s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 kubectl -- --context functional-461000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.37s)
TestFunctional/serial/MinikubeKubectlCmdDirectly (2.21s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out\kubectl.exe --context functional-461000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (2.21s)
TestFunctional/serial/ExtraConfig (43.3s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-461000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-461000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.3019485s)
functional_test.go:776: restart took 43.3030568s for "functional-461000" cluster.
I1212 19:42:19.930749   13396 config.go:182] Loaded profile config "functional-461000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (43.30s)
TestFunctional/serial/ComponentHealth (0.13s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-461000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.13s)
TestFunctional/serial/LogsCmd (1.77s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 logs
functional_test.go:1251: (dbg) Done: out/minikube-windows-amd64.exe -p functional-461000 logs: (1.7657725s)
--- PASS: TestFunctional/serial/LogsCmd (1.77s)
TestFunctional/serial/LogsFileCmd (1.89s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialLogsFileCmd4170630621\001\logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-windows-amd64.exe -p functional-461000 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialLogsFileCmd4170630621\001\logs.txt: (1.8765297s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.89s)
TestFunctional/serial/InvalidService (5.48s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-461000 apply -f testdata\invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-461000
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-461000: exit status 115 (1.0551333s)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31676 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_service_9c977cb937a5c6299cc91c983e64e702e081bf76_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-461000 delete -f testdata\invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-461000 delete -f testdata\invalidsvc.yaml: (1.090761s)
--- PASS: TestFunctional/serial/InvalidService (5.48s)
TestFunctional/parallel/ConfigCmd (1.15s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-461000 config get cpus: exit status 14 (168.6509ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-461000 config get cpus: exit status 14 (174.3471ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (1.15s)
TestFunctional/parallel/DryRun (1.59s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-461000 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:989: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-461000 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (655.0873ms)
-- stdout --
	* [functional-461000] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1212 19:43:40.723029    9676 out.go:360] Setting OutFile to fd 2008 ...
	I1212 19:43:40.770597    9676 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:43:40.770597    9676 out.go:374] Setting ErrFile to fd 2036...
	I1212 19:43:40.770637    9676 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:43:40.783539    9676 out.go:368] Setting JSON to false
	I1212 19:43:40.786557    9676 start.go:133] hostinfo: {"hostname":"minikube4","uptime":2758,"bootTime":1765565861,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 19:43:40.786557    9676 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 19:43:40.791683    9676 out.go:179] * [functional-461000] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 19:43:40.796886    9676 notify.go:221] Checking for updates...
	I1212 19:43:40.799773    9676 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 19:43:40.803401    9676 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 19:43:40.806931    9676 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 19:43:40.810355    9676 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 19:43:40.813661    9676 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:43:40.817257    9676 config.go:182] Loaded profile config "functional-461000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1212 19:43:40.818168    9676 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:43:40.941341    9676 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 19:43:40.945452    9676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:43:41.195023    9676 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 19:43:41.171217105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 19:43:41.213408    9676 out.go:179] * Using the docker driver based on existing profile
	I1212 19:43:41.217292    9676 start.go:309] selected driver: docker
	I1212 19:43:41.217292    9676 start.go:927] validating driver "docker" against &{Name:functional-461000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-461000 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:43:41.217957    9676 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 19:43:41.257470    9676 out.go:203] 
	W1212 19:43:41.259522    9676 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1212 19:43:41.263385    9676 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-461000 --dry-run --alsologtostderr -v=1 --driver=docker
--- PASS: TestFunctional/parallel/DryRun (1.59s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-461000 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-461000 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (623.293ms)

                                                
                                                
-- stdout --
	* [functional-461000] minikube v1.37.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 19:43:38.148306   10204 out.go:360] Setting OutFile to fd 1912 ...
	I1212 19:43:38.194863   10204 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:43:38.194863   10204 out.go:374] Setting ErrFile to fd 1868...
	I1212 19:43:38.194863   10204 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:43:38.207856   10204 out.go:368] Setting JSON to false
	I1212 19:43:38.210576   10204 start.go:133] hostinfo: {"hostname":"minikube4","uptime":2756,"bootTime":1765565861,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 19:43:38.210576   10204 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 19:43:38.213865   10204 out.go:179] * [functional-461000] minikube v1.37.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 19:43:38.217542   10204 notify.go:221] Checking for updates...
	I1212 19:43:38.217542   10204 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 19:43:38.220074   10204 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 19:43:38.222058   10204 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 19:43:38.223817   10204 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 19:43:38.225559   10204 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:43:38.228111   10204 config.go:182] Loaded profile config "functional-461000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1212 19:43:38.228111   10204 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:43:38.349472   10204 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 19:43:38.352462   10204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:43:38.584824   10204 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 19:43:38.564898819 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 19:43:38.591824   10204 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1212 19:43:38.594823   10204 start.go:309] selected driver: docker
	I1212 19:43:38.594823   10204 start.go:927] validating driver "docker" against &{Name:functional-461000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-461000 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:43:38.594823   10204 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 19:43:38.644651   10204 out.go:203] 
	W1212 19:43:38.646678   10204 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1212 19:43:38.648999   10204 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.62s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 status
functional_test.go:875: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.96s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.49s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (64.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [dcf9028f-8f88-496d-8c04-f1a95b7787c5] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.0069431s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-461000 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-461000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-461000 get pvc myclaim -o=json
I1212 19:42:45.466077   13396 retry.go:31] will retry after 2.144409069s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:5f35cb2c-df12-4fd5-a7fb-42ebd6655c39 ResourceVersion:799 Generation:0 CreationTimestamp:2025-12-12 19:42:45 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001c0c7a0 VolumeMode:0xc001c0c7b0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-461000 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-461000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [94f8ca69-319f-44c1-a958-3b1dfc35c8a1] Pending
helpers_test.go:353: "sp-pod" [94f8ca69-319f-44c1-a958-3b1dfc35c8a1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [94f8ca69-319f-44c1-a958-3b1dfc35c8a1] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 46.0054198s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-461000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-461000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-461000 delete -f testdata/storage-provisioner/pod.yaml: (1.907103s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-461000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [de1b6249-3eed-4955-a209-58379714fec2] Pending
helpers_test.go:353: "sp-pod" [de1b6249-3eed-4955-a209-58379714fec2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [de1b6249-3eed-4955-a209-58379714fec2] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.0075952s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-461000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (64.83s)

                                                
                                    
TestFunctional/parallel/SSHCmd (1.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.49s)

                                                
                                    
TestFunctional/parallel/CpCmd (4.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 ssh -n functional-461000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 cp functional-461000:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalparallelCpCmd640105592\001\cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 ssh -n functional-461000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 ssh -n functional-461000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (4.39s)

                                                
                                    
TestFunctional/parallel/MySQL (82.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-461000 replace --force -f testdata\mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-jnxxw" [614b9beb-c781-4e59-ba5a-2a6b9c0b68b3] Pending
helpers_test.go:353: "mysql-6bcdcbc558-jnxxw" [614b9beb-c781-4e59-ba5a-2a6b9c0b68b3] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-jnxxw" [614b9beb-c781-4e59-ba5a-2a6b9c0b68b3] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 57.0049878s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-461000 exec mysql-6bcdcbc558-jnxxw -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-461000 exec mysql-6bcdcbc558-jnxxw -- mysql -ppassword -e "show databases;": exit status 1 (226ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1212 19:43:29.076822   13396 retry.go:31] will retry after 808.038184ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-461000 exec mysql-6bcdcbc558-jnxxw -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-461000 exec mysql-6bcdcbc558-jnxxw -- mysql -ppassword -e "show databases;": exit status 1 (208.0499ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1212 19:43:30.096969   13396 retry.go:31] will retry after 929.608628ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-461000 exec mysql-6bcdcbc558-jnxxw -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-461000 exec mysql-6bcdcbc558-jnxxw -- mysql -ppassword -e "show databases;": exit status 1 (200.9587ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1212 19:43:31.232362   13396 retry.go:31] will retry after 1.279267917s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-461000 exec mysql-6bcdcbc558-jnxxw -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-461000 exec mysql-6bcdcbc558-jnxxw -- mysql -ppassword -e "show databases;": exit status 1 (193.1156ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1212 19:43:32.710890   13396 retry.go:31] will retry after 4.737795704s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-461000 exec mysql-6bcdcbc558-jnxxw -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-461000 exec mysql-6bcdcbc558-jnxxw -- mysql -ppassword -e "show databases;": exit status 1 (501.7967ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I1212 19:43:37.958155   13396 retry.go:31] will retry after 4.951855023s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-461000 exec mysql-6bcdcbc558-jnxxw -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-461000 exec mysql-6bcdcbc558-jnxxw -- mysql -ppassword -e "show databases;": exit status 1 (266.7405ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I1212 19:43:43.183193   13396 retry.go:31] will retry after 10.764789496s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-461000 exec mysql-6bcdcbc558-jnxxw -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (82.74s)
TestFunctional/parallel/FileSync (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/13396/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 ssh "sudo cat /etc/test/nested/copy/13396/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.58s)
TestFunctional/parallel/CertSync (3.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/13396.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 ssh "sudo cat /etc/ssl/certs/13396.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/13396.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 ssh "sudo cat /usr/share/ca-certificates/13396.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/133962.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 ssh "sudo cat /etc/ssl/certs/133962.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/133962.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 ssh "sudo cat /usr/share/ca-certificates/133962.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (3.59s)
TestFunctional/parallel/NodeLabels (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-461000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.14s)
TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-461000 ssh "sudo systemctl is-active crio": exit status 1 (586.9338ms)

-- stdout --
	inactive
-- /stdout --
** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)
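The test passes despite the non-zero exit above because `systemctl is-active` exits 0 only when the unit is active; a nonzero code (3 for "inactive") is precisely what a disabled runtime should report. A small sketch of that interpretation (`runtimeInactive` is a hypothetical helper, not the test's code):

```go
package main

import "fmt"

// runtimeInactive interprets a `systemctl is-active <unit>` result:
// exit code 0 means the unit is active; a nonzero code with
// "inactive" on stdout means the unit is cleanly not running.
func runtimeInactive(exitCode int, stdout string) bool {
	return exitCode != 0 && stdout == "inactive"
}

func main() {
	// What the log above recorded for crio on a docker-runtime cluster.
	fmt.Println(runtimeInactive(3, "inactive"))
	// An active runtime would report exit code 0 instead.
	fmt.Println(runtimeInactive(0, "active"))
}
```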
TestFunctional/parallel/License (1.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2293: (dbg) Done: out/minikube-windows-amd64.exe license: (1.5542836s)
--- PASS: TestFunctional/parallel/License (1.57s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-461000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/my-image:functional-461000
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-461000
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-461000
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-461000 image ls --format short --alsologtostderr:
I1212 19:43:51.855860   11832 out.go:360] Setting OutFile to fd 1696 ...
I1212 19:43:51.898216   11832 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:43:51.898216   11832 out.go:374] Setting ErrFile to fd 1492...
I1212 19:43:51.898216   11832 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:43:51.910091   11832 config.go:182] Loaded profile config "functional-461000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1212 19:43:51.910671   11832 config.go:182] Loaded profile config "functional-461000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1212 19:43:51.918187   11832 cli_runner.go:164] Run: docker container inspect functional-461000 --format={{.State.Status}}
I1212 19:43:51.978271   11832 ssh_runner.go:195] Run: systemctl --version
I1212 19:43:51.981946   11832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-461000
I1212 19:43:52.037364   11832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55369 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-461000\id_rsa Username:docker}
I1212 19:43:52.172823   11832 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.46s)
TestFunctional/parallel/UpdateContextCmd/no_changes (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.31s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.30s)
TestFunctional/parallel/ImageCommands/ImageListTable (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-461000 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ localhost/my-image                          │ functional-461000 │ 2a54c8a53d03c │ 1.24MB │
│ docker.io/library/minikube-local-cache-test │ functional-461000 │ 84bbd422d0d0c │ 30B    │
│ public.ecr.aws/docker/library/mysql         │ 8.4               │ 20d0be4ee4524 │ 785MB  │
│ registry.k8s.io/etcd                        │ 3.6.5-0           │ a3e246e9556e9 │ 62.5MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ public.ecr.aws/nginx/nginx                  │ alpine            │ a236f84b9d5d2 │ 53.7MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.2           │ a5f569d49a979 │ 88MB   │
│ registry.k8s.io/kube-scheduler              │ v1.34.2           │ 88320b5498ff2 │ 52.8MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.2           │ 8aa150647e88a │ 71.9MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.2           │ 01e8bacf0f500 │ 74.9MB │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 52546a367cc9e │ 75MB   │
│ docker.io/kicbase/echo-server               │ functional-461000 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-461000 image ls --format table --alsologtostderr:
I1212 19:43:51.412759    7272 out.go:360] Setting OutFile to fd 1672 ...
I1212 19:43:51.455733    7272 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:43:51.455733    7272 out.go:374] Setting ErrFile to fd 1680...
I1212 19:43:51.455733    7272 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:43:51.467356    7272 config.go:182] Loaded profile config "functional-461000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1212 19:43:51.467356    7272 config.go:182] Loaded profile config "functional-461000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1212 19:43:51.474358    7272 cli_runner.go:164] Run: docker container inspect functional-461000 --format={{.State.Status}}
I1212 19:43:51.539391    7272 ssh_runner.go:195] Run: systemctl --version
I1212 19:43:51.541969    7272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-461000
I1212 19:43:51.596864    7272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55369 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-461000\id_rsa Username:docker}
I1212 19:43:51.722684    7272 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.44s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.32s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-461000 image ls --format json --alsologtostderr:
[{"id":"2a54c8a53d03c72dbf44add2015eacd06bea051069dadd5f38b9afe559f87ee5","repoDigests":[],"repoTags":["localhost/my-image:functional-461000"],"size":"1240000"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":[],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"785000000"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"74900000"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"62500000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"75000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59
475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-461000","docker.io/kicbase/echo-server:latest"],"size":"4940000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"52800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"84bbd422d0d0c694e3a40656e3a849aa757670827cc8f771b1385867242c968a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-461000"],"size":"30"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":[],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"53700000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4
750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"88000000"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"71900000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-461000 image ls --format json --alsologtostderr:
I1212 19:43:50.976313     272 out.go:360] Setting OutFile to fd 1712 ...
I1212 19:43:51.019136     272 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:43:51.019136     272 out.go:374] Setting ErrFile to fd 1788...
I1212 19:43:51.019136     272 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:43:51.031781     272 config.go:182] Loaded profile config "functional-461000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1212 19:43:51.032783     272 config.go:182] Loaded profile config "functional-461000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1212 19:43:51.039071     272 cli_runner.go:164] Run: docker container inspect functional-461000 --format={{.State.Status}}
I1212 19:43:51.098553     272 ssh_runner.go:195] Run: systemctl --version
I1212 19:43:51.101825     272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-461000
I1212 19:43:51.156005     272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55369 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-461000\id_rsa Username:docker}
I1212 19:43:51.282985     272 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.44s)
TestFunctional/parallel/ImageCommands/ImageListYaml (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-461000 image ls --format yaml --alsologtostderr:
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-461000
- docker.io/kicbase/echo-server:latest
size: "4940000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 84bbd422d0d0c694e3a40656e3a849aa757670827cc8f771b1385867242c968a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-461000
size: "30"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "74900000"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "62500000"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests: []
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "53700000"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "88000000"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "52800000"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "71900000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests: []
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "785000000"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "75000000"
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-461000 image ls --format yaml --alsologtostderr:
I1212 19:43:45.650155   10984 out.go:360] Setting OutFile to fd 1984 ...
I1212 19:43:45.691663   10984 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:43:45.691663   10984 out.go:374] Setting ErrFile to fd 1972...
I1212 19:43:45.691663   10984 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:43:45.706231   10984 config.go:182] Loaded profile config "functional-461000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1212 19:43:45.706560   10984 config.go:182] Loaded profile config "functional-461000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1212 19:43:45.713180   10984 cli_runner.go:164] Run: docker container inspect functional-461000 --format={{.State.Status}}
I1212 19:43:45.777361   10984 ssh_runner.go:195] Run: systemctl --version
I1212 19:43:45.780201   10984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-461000
I1212 19:43:45.837810   10984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55369 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-461000\id_rsa Username:docker}
I1212 19:43:45.963110   10984 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.45s)
TestFunctional/parallel/ImageCommands/ImageBuild (4.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-461000 ssh pgrep buildkitd: exit status 1 (550.9782ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 image build -t localhost/my-image:functional-461000 testdata\build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-windows-amd64.exe -p functional-461000 image build -t localhost/my-image:functional-461000 testdata\build --alsologtostderr: (3.8807581s)
functional_test.go:338: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-461000 image build -t localhost/my-image:functional-461000 testdata\build --alsologtostderr:
I1212 19:43:46.650145    9352 out.go:360] Setting OutFile to fd 956 ...
I1212 19:43:46.713448    9352 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:43:46.713448    9352 out.go:374] Setting ErrFile to fd 1952...
I1212 19:43:46.713448    9352 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:43:46.726561    9352 config.go:182] Loaded profile config "functional-461000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1212 19:43:46.747124    9352 config.go:182] Loaded profile config "functional-461000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1212 19:43:46.753784    9352 cli_runner.go:164] Run: docker container inspect functional-461000 --format={{.State.Status}}
I1212 19:43:46.813486    9352 ssh_runner.go:195] Run: systemctl --version
I1212 19:43:46.817287    9352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-461000
I1212 19:43:46.872515    9352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55369 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-461000\id_rsa Username:docker}
I1212 19:43:46.998775    9352 build_images.go:162] Building image from path: C:\Users\jenkins.minikube4\AppData\Local\Temp\build.1402084190.tar
I1212 19:43:47.003800    9352 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1212 19:43:47.024029    9352 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1402084190.tar
I1212 19:43:47.031961    9352 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1402084190.tar: stat -c "%s %y" /var/lib/minikube/build/build.1402084190.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1402084190.tar': No such file or directory
I1212 19:43:47.031961    9352 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\AppData\Local\Temp\build.1402084190.tar --> /var/lib/minikube/build/build.1402084190.tar (3072 bytes)
I1212 19:43:47.062763    9352 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1402084190
I1212 19:43:47.081530    9352 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1402084190 -xf /var/lib/minikube/build/build.1402084190.tar
I1212 19:43:47.097951    9352 docker.go:361] Building image: /var/lib/minikube/build/build.1402084190
I1212 19:43:47.101490    9352 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-461000 /var/lib/minikube/build/build.1402084190
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 ...

#5 [internal] load build context
#5 transferring context: 62B done
#5 DONE 0.1s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#4 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#4 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#4 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#4 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.5s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:2a54c8a53d03c72dbf44add2015eacd06bea051069dadd5f38b9afe559f87ee5 done
#8 naming to localhost/my-image:functional-461000 0.0s done
#8 DONE 0.2s
I1212 19:43:50.385705    9352 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-461000 /var/lib/minikube/build/build.1402084190: (3.283636s)
I1212 19:43:50.390485    9352 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1402084190
I1212 19:43:50.408754    9352 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1402084190.tar
I1212 19:43:50.422137    9352 build_images.go:218] Built localhost/my-image:functional-461000 from C:\Users\jenkins.minikube4\AppData\Local\Temp\build.1402084190.tar
I1212 19:43:50.423135    9352 build_images.go:134] succeeded building to: functional-461000
I1212 19:43:50.423135    9352 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.88s)

TestFunctional/parallel/ImageCommands/Setup (1.78s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.7018776s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-461000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.78s)

TestFunctional/parallel/Version/short (0.17s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.17s)

TestFunctional/parallel/Version/components (0.89s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.89s)

TestFunctional/parallel/DockerEnv/powershell (6.08s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:514: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-461000 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-461000"
functional_test.go:514: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-461000 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-461000": (3.4094104s)
functional_test.go:537: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-461000 docker-env | Invoke-Expression ; docker images"
functional_test.go:537: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-461000 docker-env | Invoke-Expression ; docker images": (2.6689444s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (6.08s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 image load --daemon kicbase/echo-server:functional-461000 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-windows-amd64.exe -p functional-461000 image load --daemon kicbase/echo-server:functional-461000 --alsologtostderr: (3.0528134s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.57s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 image load --daemon kicbase/echo-server:functional-461000 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-windows-amd64.exe -p functional-461000 image load --daemon kicbase/echo-server:functional-461000 --alsologtostderr: (3.6716886s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.29s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.95s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-461000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-461000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-461000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-461000 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 1852: OpenProcess: The parameter is incorrect.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.95s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-461000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (54.53s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-461000 apply -f testdata\testsvc.yaml
E1212 19:42:38.624597   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [43d566e2-0778-4d01-9fc8-d166cc3fc0b8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [43d566e2-0778-4d01-9fc8-d166cc3fc0b8] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 54.007398s
I1212 19:43:32.993768   13396 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (54.53s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:250: (dbg) Done: docker pull kicbase/echo-server:latest: (1.6738809s)
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-461000
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 image load --daemon kicbase/echo-server:functional-461000 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-461000 image load --daemon kicbase/echo-server:functional-461000 --alsologtostderr: (3.3009337s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.77s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 image save kicbase/echo-server:functional-461000 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:395: (dbg) Done: out/minikube-windows-amd64.exe -p functional-461000 image save kicbase/echo-server:functional-461000 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr: (1.0752643s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.08s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 image rm kicbase/echo-server:functional-461000 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.92s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.21s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-461000
functional_test.go:439: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 image save --daemon kicbase/echo-server:functional-461000 --alsologtostderr
functional_test.go:439: (dbg) Done: out/minikube-windows-amd64.exe -p functional-461000 image save --daemon kicbase/echo-server:functional-461000 --alsologtostderr: (2.5911858s)
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-461000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.74s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-461000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-461000 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 4752: TerminateProcess: Access is denied.
helpers_test.go:526: unable to kill pid 10236: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-461000 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-461000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-n2rsx" [7efe1545-939e-4468-82ac-e7737cd4338e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-n2rsx" [7efe1545-939e-4468-82ac-e7737cd4338e] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.0082004s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.28s)

TestFunctional/parallel/ProfileCmd/profile_not_create (1s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (1.00s)

TestFunctional/parallel/ProfileCmd/profile_list (0.87s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1330: Took "706.0115ms" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1344: Took "163.4649ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.87s)

TestFunctional/parallel/ProfileCmd/profile_json_output (1.19s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1376: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (1.0241109s)
functional_test.go:1381: Took "1.0250508s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1394: Took "164.8796ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (1.19s)

TestFunctional/parallel/ServiceCmd/List (1.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 service list
functional_test.go:1469: (dbg) Done: out/minikube-windows-amd64.exe -p functional-461000 service list: (1.3310731s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.33s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-windows-amd64.exe -p functional-461000 service list -o json: (1.2086976s)
functional_test.go:1504: Took "1.2086976s" to run "out/minikube-windows-amd64.exe -p functional-461000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.21s)

TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-461000 service --namespace=default --https --url hello-node: exit status 1 (15.0100213s)

-- stdout --
	https://127.0.0.1:55719

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1532: found endpoint: https://127.0.0.1:55719
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

TestFunctional/parallel/ServiceCmd/Format (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-461000 service hello-node --url --format={{.IP}}: exit status 1 (15.0095204s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.01s)

TestFunctional/parallel/ServiceCmd/URL (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-461000 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-461000 service hello-node --url: exit status 1 (15.0105157s)

-- stdout --
	http://127.0.0.1:55741

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1575: found endpoint for hello-node: http://127.0.0.1:55741
E1212 19:44:54.759436   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:45:22.468311   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.01s)

TestFunctional/delete_echo-server_images (0.14s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-461000
--- PASS: TestFunctional/delete_echo-server_images (0.14s)

TestFunctional/delete_my-image_image (0.06s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-461000
--- PASS: TestFunctional/delete_my-image_image (0.06s)

TestFunctional/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-461000
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\13396\hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.10s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-468800 cache add registry.k8s.io/pause:3.1: (3.494135s)
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-468800 cache add registry.k8s.io/pause:3.3: (3.067151s)
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-468800 cache add registry.k8s.io/pause:latest: (3.0521676s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (9.61s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-468800 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach2401020456\001
functional_test.go:1104: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 cache add minikube-local-cache-test:functional-468800
functional_test.go:1104: (dbg) Done: out/minikube-windows-amd64.exe -p functional-468800 cache add minikube-local-cache-test:functional-468800: (2.5933571s)
functional_test.go:1109: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 cache delete minikube-local-cache-test:functional-468800
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-468800
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (3.69s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.59s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-468800 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (559.7402ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-windows-amd64.exe -p functional-468800 cache reload: (2.6835949s)
functional_test.go:1178: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (4.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.36s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 logs
functional_test.go:1251: (dbg) Done: out/minikube-windows-amd64.exe -p functional-468800 logs: (1.2245329s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3016148659\001\logs.txt
E1212 20:19:54.781807   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1265: (dbg) Done: out/minikube-windows-amd64.exe -p functional-468800 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3016148659\001\logs.txt: (1.3198221s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-468800 config get cpus: exit status 14 (153.9365ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-468800 config get cpus: exit status 14 (195.3852ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (1.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-468800 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-468800 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 23 (666.9224ms)

-- stdout --
	* [functional-468800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1212 20:22:22.432975   14144 out.go:360] Setting OutFile to fd 1268 ...
	I1212 20:22:22.475018   14144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:22:22.475082   14144 out.go:374] Setting ErrFile to fd 952...
	I1212 20:22:22.475126   14144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:22:22.490216   14144 out.go:368] Setting JSON to false
	I1212 20:22:22.493009   14144 start.go:133] hostinfo: {"hostname":"minikube4","uptime":5080,"bootTime":1765565862,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 20:22:22.493131   14144 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 20:22:22.496837   14144 out.go:179] * [functional-468800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 20:22:22.499370   14144 notify.go:221] Checking for updates...
	I1212 20:22:22.501516   14144 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 20:22:22.503679   14144 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:22:22.505108   14144 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 20:22:22.507104   14144 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:22:22.509107   14144 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:22:22.512079   14144 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 20:22:22.512738   14144 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:22:22.646019   14144 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 20:22:22.650023   14144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:22:22.886537   14144 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 20:22:22.863244897 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 20:22:22.892104   14144 out.go:179] * Using the docker driver based on existing profile
	I1212 20:22:22.895105   14144 start.go:309] selected driver: docker
	I1212 20:22:22.895105   14144 start.go:927] validating driver "docker" against &{Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:22:22.895105   14144 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:22:22.977906   14144 out.go:203] 
	W1212 20:22:22.979484   14144 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1212 20:22:22.982822   14144 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-468800 --dry-run --alsologtostderr -v=1 --driver=docker --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (1.56s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-468800 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-468800 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 23 (687.7529ms)

-- stdout --
	* [functional-468800] minikube v1.37.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1212 20:22:23.221121    3452 out.go:360] Setting OutFile to fd 1996 ...
	I1212 20:22:23.279090    3452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:22:23.279090    3452 out.go:374] Setting ErrFile to fd 1012...
	I1212 20:22:23.279090    3452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:22:23.304610    3452 out.go:368] Setting JSON to false
	I1212 20:22:23.307604    3452 start.go:133] hostinfo: {"hostname":"minikube4","uptime":5081,"bootTime":1765565862,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1212 20:22:23.307604    3452 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1212 20:22:23.311607    3452 out.go:179] * [functional-468800] minikube v1.37.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1212 20:22:23.312604    3452 notify.go:221] Checking for updates...
	I1212 20:22:23.315613    3452 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1212 20:22:23.317610    3452 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:22:23.319615    3452 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1212 20:22:23.322604    3452 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:22:23.324597    3452 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:22:23.326596    3452 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1212 20:22:23.327598    3452 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:22:23.481603    3452 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1212 20:22:23.484608    3452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:22:23.737731    3452 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-12 20:22:23.719883179 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1212 20:22:23.742724    3452 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1212 20:22:23.744723    3452 start.go:309] selected driver: docker
	I1212 20:22:23.744723    3452 start.go:927] validating driver "docker" against &{Name:functional-468800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-468800 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:22:23.744723    3452 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:22:23.781732    3452 out.go:203] 
	W1212 20:22:23.784721    3452 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1212 20:22:23.786731    3452 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.69s)
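The RSRC_INSUFFICIENT_REQ_MEMORY exit captured above comes from minikube's pre-flight validation of the requested memory allocation: the test asks for 250 MiB, which is below the 1800 MB floor stated in the message. A minimal sketch of that comparison (the floor value is taken from the log message; the function name and structure are illustrative, not minikube's actual Go code):

```python
# Illustrative re-creation of the requested-memory pre-flight check.
# The 1800 MB minimum comes from the log message above; names are hypothetical.
MIN_USABLE_MB = 1800

def validate_requested_memory(requested_mb: int) -> None:
    """Reject allocations below the usable minimum, mirroring the
    RSRC_INSUFFICIENT_REQ_MEMORY exit seen in the stderr capture."""
    if requested_mb < MIN_USABLE_MB:
        raise ValueError(
            f"RSRC_INSUFFICIENT_REQ_MEMORY: requested allocation {requested_mb}MiB "
            f"is less than the usable minimum of {MIN_USABLE_MB}MB"
        )

# The InternationalLanguage test requests 250 MiB, which trips the check:
try:
    validate_requested_memory(250)
except ValueError as e:
    print(e)
```

The test passes because it only verifies that the localized error is printed, not that the cluster starts.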

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (1.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (1.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (3.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 ssh -n functional-468800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 cp functional-468800:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp1966122111\001\cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 ssh -n functional-468800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 ssh -n functional-468800 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (3.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.54s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/13396/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 ssh "sudo cat /etc/test/nested/copy/13396/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.54s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (3.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/13396.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 ssh "sudo cat /etc/ssl/certs/13396.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/13396.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 ssh "sudo cat /usr/share/ca-certificates/13396.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/133962.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 ssh "sudo cat /etc/ssl/certs/133962.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/133962.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 ssh "sudo cat /usr/share/ca-certificates/133962.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (3.21s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.61s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-468800 ssh "sudo systemctl is-active crio": exit status 1 (610.0607ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.61s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (2.66s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2293: (dbg) Done: out/minikube-windows-amd64.exe license: (2.6420925s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (2.66s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.95s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.95s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-468800 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-468800 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.31s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.88s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.88s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.81s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1330: Took "655.7821ms" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1344: Took "158.4426ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.81s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.8s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1381: Took "645.8071ms" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1394: Took "157.3165ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.80s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.48s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-468800 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-468800
docker.io/kicbase/echo-server:functional-468800
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-468800 image ls --format short --alsologtostderr:
I1212 20:22:25.098457    4476 out.go:360] Setting OutFile to fd 1876 ...
I1212 20:22:25.149472    4476 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:22:25.149472    4476 out.go:374] Setting ErrFile to fd 1740...
I1212 20:22:25.149472    4476 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:22:25.161457    4476 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1212 20:22:25.161457    4476 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1212 20:22:25.169125    4476 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
I1212 20:22:25.231295    4476 ssh_runner.go:195] Run: systemctl --version
I1212 20:22:25.235295    4476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
I1212 20:22:25.285304    4476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
I1212 20:22:25.399875    4476 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.48s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-468800 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ docker.io/kicbase/echo-server               │ functional-468800 │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-beta.0    │ aa9d02839d8de │ 89.7MB │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-beta.0    │ 7bb6219ddab95 │ 51.7MB │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-beta.0    │ 8a4ded35a3eb1 │ 70.7MB │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-beta.0    │ 45f3cc72d235f │ 75.8MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ docker.io/library/minikube-local-cache-test │ functional-468800 │ 84bbd422d0d0c │ 30B    │
│ registry.k8s.io/coredns/coredns             │ v1.13.1           │ aa5e3ebc0dfed │ 78.1MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0           │ a3e246e9556e9 │ 62.5MB │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-468800 image ls --format table --alsologtostderr:
I1212 20:22:26.499350   10352 out.go:360] Setting OutFile to fd 1076 ...
I1212 20:22:26.550348   10352 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:22:26.550348   10352 out.go:374] Setting ErrFile to fd 864...
I1212 20:22:26.550348   10352 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:22:26.565366   10352 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1212 20:22:26.566351   10352 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1212 20:22:26.574366   10352 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
I1212 20:22:26.630355   10352 ssh_runner.go:195] Run: systemctl --version
I1212 20:22:26.634348   10352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
I1212 20:22:26.681348   10352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
I1212 20:22:26.798967   10352 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.45s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-468800 image ls --format json --alsologtostderr:
[{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"70700000"},
{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"75800000"},
{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"78100000"},
{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"62500000"},
{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},
{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-468800"],"size":"4940000"},
{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},
{"id":"84bbd422d0d0c694e3a40656e3a849aa757670827cc8f771b1385867242c968a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-468800"],"size":"30"},
{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"51700000"},
{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},
{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},
{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},
{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"89700000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-468800 image ls --format json --alsologtostderr:
I1212 20:22:26.031802    4208 out.go:360] Setting OutFile to fd 832 ...
I1212 20:22:26.091805    4208 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:22:26.091805    4208 out.go:374] Setting ErrFile to fd 1480...
I1212 20:22:26.091805    4208 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:22:26.108821    4208 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1212 20:22:26.108821    4208 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1212 20:22:26.118832    4208 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
I1212 20:22:26.176689    4208 ssh_runner.go:195] Run: systemctl --version
I1212 20:22:26.179688    4208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
I1212 20:22:26.229694    4208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
I1212 20:22:26.353937    4208 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.47s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.46s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-468800 image ls --format yaml --alsologtostderr:
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "89700000"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "51700000"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "70700000"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "78100000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-468800
size: "4940000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 84bbd422d0d0c694e3a40656e3a849aa757670827cc8f771b1385867242c968a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-468800
size: "30"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "75800000"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "62500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-468800 image ls --format yaml --alsologtostderr:
I1212 20:22:25.565252   10516 out.go:360] Setting OutFile to fd 2008 ...
I1212 20:22:25.622481   10516 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:22:25.622481   10516 out.go:374] Setting ErrFile to fd 2024...
I1212 20:22:25.623005   10516 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:22:25.636259   10516 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1212 20:22:25.636259   10516 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1212 20:22:25.643268   10516 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
I1212 20:22:25.701262   10516 ssh_runner.go:195] Run: systemctl --version
I1212 20:22:25.704259   10516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
I1212 20:22:25.753257   10516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
I1212 20:22:25.888183   10516 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.46s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (5s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-468800 ssh pgrep buildkitd: exit status 1 (529.2599ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 image build -t localhost/my-image:functional-468800 testdata\build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-windows-amd64.exe -p functional-468800 image build -t localhost/my-image:functional-468800 testdata\build --alsologtostderr: (4.0188963s)
functional_test.go:338: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-468800 image build -t localhost/my-image:functional-468800 testdata\build --alsologtostderr:
I1212 20:22:25.899198    9748 out.go:360] Setting OutFile to fd 1864 ...
I1212 20:22:25.944801    9748 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:22:25.944801    9748 out.go:374] Setting ErrFile to fd 1036...
I1212 20:22:25.944801    9748 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:22:25.956810    9748 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1212 20:22:25.960811    9748 config.go:182] Loaded profile config "functional-468800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1212 20:22:25.968805    9748 cli_runner.go:164] Run: docker container inspect functional-468800 --format={{.State.Status}}
I1212 20:22:26.026816    9748 ssh_runner.go:195] Run: systemctl --version
I1212 20:22:26.029802    9748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-468800
I1212 20:22:26.095828    9748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55779 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-468800\id_rsa Username:docker}
I1212 20:22:26.214707    9748 build_images.go:162] Building image from path: C:\Users\jenkins.minikube4\AppData\Local\Temp\build.204136844.tar
I1212 20:22:26.219709    9748 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1212 20:22:26.235704    9748 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.204136844.tar
I1212 20:22:26.241699    9748 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.204136844.tar: stat -c "%s %y" /var/lib/minikube/build/build.204136844.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.204136844.tar': No such file or directory
I1212 20:22:26.241699    9748 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\AppData\Local\Temp\build.204136844.tar --> /var/lib/minikube/build/build.204136844.tar (3072 bytes)
I1212 20:22:26.270689    9748 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.204136844
I1212 20:22:26.290301    9748 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.204136844 -xf /var/lib/minikube/build/build.204136844.tar
I1212 20:22:26.302797    9748 docker.go:361] Building image: /var/lib/minikube/build/build.204136844
I1212 20:22:26.306398    9748 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-468800 /var/lib/minikube/build/build.204136844
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s
#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 ...
#5 [internal] load build context
#5 transferring context: 62B done
#5 DONE 0.1s
#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#4 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#4 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#4 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#4 DONE 0.7s
#6 [2/3] RUN true
#6 DONE 0.5s
#7 [3/3] ADD content.txt /
#7 DONE 0.2s
#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:ef4e398c31c620790719b1683077a14c8a3e4e2a6aa56ce6eb1d065372a87b37 done
#8 naming to localhost/my-image:functional-468800 0.0s done
#8 DONE 0.2s
I1212 20:22:29.777513    9748 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-468800 /var/lib/minikube/build/build.204136844: (3.4710715s)
I1212 20:22:29.782041    9748 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.204136844
I1212 20:22:29.798725    9748 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.204136844.tar
I1212 20:22:29.812082    9748 build_images.go:218] Built localhost/my-image:functional-468800 from C:\Users\jenkins.minikube4\AppData\Local\Temp\build.204136844.tar
I1212 20:22:29.812082    9748 build_images.go:134] succeeded building to: functional-468800
I1212 20:22:29.812082    9748 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 image ls
E1212 20:22:31.866845   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (5.00s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.87s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-468800
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.87s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (3.22s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 image load --daemon kicbase/echo-server:functional-468800 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-windows-amd64.exe -p functional-468800 image load --daemon kicbase/echo-server:functional-468800 --alsologtostderr: (2.7564569s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (3.22s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (2.8s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 image load --daemon kicbase/echo-server:functional-468800 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-windows-amd64.exe -p functional-468800 image load --daemon kicbase/echo-server:functional-468800 --alsologtostderr: (2.3432091s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (2.80s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (3.57s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-468800
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 image load --daemon kicbase/echo-server:functional-468800 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-468800 image load --daemon kicbase/echo-server:functional-468800 --alsologtostderr: (2.380565s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (3.57s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.67s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 image save kicbase/echo-server:functional-468800 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.67s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.93s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 image rm kicbase/echo-server:functional-468800 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.93s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (1.21s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (1.21s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.89s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-468800
functional_test.go:439: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-468800 image save --daemon kicbase/echo-server:functional-468800 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-468800
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.89s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.14s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-468800
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.14s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-468800
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.06s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-468800
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.06s)
TestMultiControlPlane/serial/StartCluster (240.88s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker
E1212 20:24:54.785571   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:25:17.978224   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:25:17.986231   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:25:17.999222   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:25:18.022212   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:25:18.065213   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:25:18.148219   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:25:18.310547   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:25:18.632069   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:25:19.274227   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:25:20.557751   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:25:23.119771   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:25:28.242183   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:25:34.944362   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:25:38.484648   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:25:58.967377   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:26:39.930014   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:27:31.870500   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:28:01.853546   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe -p ha-309900 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker: (3m59.2592568s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 status --alsologtostderr -v 5
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-309900 status --alsologtostderr -v 5: (1.6214003s)
--- PASS: TestMultiControlPlane/serial/StartCluster (240.88s)
TestMultiControlPlane/serial/DeployApp (8.92s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe -p ha-309900 kubectl -- rollout status deployment/busybox: (3.9878136s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 kubectl -- exec busybox-7b57f96db7-4nvxr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 kubectl -- exec busybox-7b57f96db7-4shc5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 kubectl -- exec busybox-7b57f96db7-vpl2j -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 kubectl -- exec busybox-7b57f96db7-4nvxr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 kubectl -- exec busybox-7b57f96db7-4shc5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 kubectl -- exec busybox-7b57f96db7-vpl2j -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 kubectl -- exec busybox-7b57f96db7-4nvxr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 kubectl -- exec busybox-7b57f96db7-4shc5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 kubectl -- exec busybox-7b57f96db7-vpl2j -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.92s)
TestMultiControlPlane/serial/PingHostFromPods (2.5s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 kubectl -- exec busybox-7b57f96db7-4nvxr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 kubectl -- exec busybox-7b57f96db7-4nvxr -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 kubectl -- exec busybox-7b57f96db7-4shc5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 kubectl -- exec busybox-7b57f96db7-4shc5 -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 kubectl -- exec busybox-7b57f96db7-vpl2j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 kubectl -- exec busybox-7b57f96db7-vpl2j -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (2.50s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe -p ha-309900 node add --alsologtostderr -v 5: (53.4634908s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-309900 status --alsologtostderr -v 5: (1.9192499s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.38s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-309900 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.14s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.9627246s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.96s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-windows-amd64.exe -p ha-309900 status --output json --alsologtostderr -v 5: (1.8839848s)
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 cp testdata\cp-test.txt ha-309900:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 ssh -n ha-309900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 cp ha-309900:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2186043989\001\cp-test_ha-309900.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 ssh -n ha-309900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 cp ha-309900:/home/docker/cp-test.txt ha-309900-m02:/home/docker/cp-test_ha-309900_ha-309900-m02.txt
E1212 20:29:37.865086   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 ssh -n ha-309900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 ssh -n ha-309900-m02 "sudo cat /home/docker/cp-test_ha-309900_ha-309900-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 cp ha-309900:/home/docker/cp-test.txt ha-309900-m03:/home/docker/cp-test_ha-309900_ha-309900-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 ssh -n ha-309900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 ssh -n ha-309900-m03 "sudo cat /home/docker/cp-test_ha-309900_ha-309900-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 cp ha-309900:/home/docker/cp-test.txt ha-309900-m04:/home/docker/cp-test_ha-309900_ha-309900-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 ssh -n ha-309900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 ssh -n ha-309900-m04 "sudo cat /home/docker/cp-test_ha-309900_ha-309900-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 cp testdata\cp-test.txt ha-309900-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 ssh -n ha-309900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 cp ha-309900-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2186043989\001\cp-test_ha-309900-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 ssh -n ha-309900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 cp ha-309900-m02:/home/docker/cp-test.txt ha-309900:/home/docker/cp-test_ha-309900-m02_ha-309900.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 ssh -n ha-309900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 ssh -n ha-309900 "sudo cat /home/docker/cp-test_ha-309900-m02_ha-309900.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 cp ha-309900-m02:/home/docker/cp-test.txt ha-309900-m03:/home/docker/cp-test_ha-309900-m02_ha-309900-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 ssh -n ha-309900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 ssh -n ha-309900-m03 "sudo cat /home/docker/cp-test_ha-309900-m02_ha-309900-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 cp ha-309900-m02:/home/docker/cp-test.txt ha-309900-m04:/home/docker/cp-test_ha-309900-m02_ha-309900-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 ssh -n ha-309900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 ssh -n ha-309900-m04 "sudo cat /home/docker/cp-test_ha-309900-m02_ha-309900-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 cp testdata\cp-test.txt ha-309900-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 ssh -n ha-309900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 cp ha-309900-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2186043989\001\cp-test_ha-309900-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 ssh -n ha-309900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 cp ha-309900-m03:/home/docker/cp-test.txt ha-309900:/home/docker/cp-test_ha-309900-m03_ha-309900.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 ssh -n ha-309900-m03 "sudo cat /home/docker/cp-test.txt"
E1212 20:29:54.789686   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 ssh -n ha-309900 "sudo cat /home/docker/cp-test_ha-309900-m03_ha-309900.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 cp ha-309900-m03:/home/docker/cp-test.txt ha-309900-m02:/home/docker/cp-test_ha-309900-m03_ha-309900-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 ssh -n ha-309900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 ssh -n ha-309900-m02 "sudo cat /home/docker/cp-test_ha-309900-m03_ha-309900-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 cp ha-309900-m03:/home/docker/cp-test.txt ha-309900-m04:/home/docker/cp-test_ha-309900-m03_ha-309900-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 ssh -n ha-309900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 ssh -n ha-309900-m04 "sudo cat /home/docker/cp-test_ha-309900-m03_ha-309900-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 cp testdata\cp-test.txt ha-309900-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 ssh -n ha-309900-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 cp ha-309900-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2186043989\001\cp-test_ha-309900-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 ssh -n ha-309900-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 cp ha-309900-m04:/home/docker/cp-test.txt ha-309900:/home/docker/cp-test_ha-309900-m04_ha-309900.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 ssh -n ha-309900-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 ssh -n ha-309900 "sudo cat /home/docker/cp-test_ha-309900-m04_ha-309900.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 cp ha-309900-m04:/home/docker/cp-test.txt ha-309900-m02:/home/docker/cp-test_ha-309900-m04_ha-309900-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 ssh -n ha-309900-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 ssh -n ha-309900-m02 "sudo cat /home/docker/cp-test_ha-309900-m04_ha-309900-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 cp ha-309900-m04:/home/docker/cp-test.txt ha-309900-m03:/home/docker/cp-test_ha-309900-m04_ha-309900-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 ssh -n ha-309900-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 ssh -n ha-309900-m03 "sudo cat /home/docker/cp-test_ha-309900-m04_ha-309900-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (33.57s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 node stop m02 --alsologtostderr -v 5
E1212 20:30:17.981380   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-windows-amd64.exe -p ha-309900 node stop m02 --alsologtostderr -v 5: (12.0027154s)
ha_test.go:371: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-309900 status --alsologtostderr -v 5: exit status 7 (1.4866994s)

-- stdout --
	ha-309900
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-309900-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-309900-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-309900-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1212 20:30:19.281771    2872 out.go:360] Setting OutFile to fd 1928 ...
	I1212 20:30:19.322768    2872 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:30:19.322768    2872 out.go:374] Setting ErrFile to fd 1416...
	I1212 20:30:19.322768    2872 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:30:19.334078    2872 out.go:368] Setting JSON to false
	I1212 20:30:19.334078    2872 mustload.go:66] Loading cluster: ha-309900
	I1212 20:30:19.334078    2872 notify.go:221] Checking for updates...
	I1212 20:30:19.334752    2872 config.go:182] Loaded profile config "ha-309900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1212 20:30:19.334752    2872 status.go:174] checking status of ha-309900 ...
	I1212 20:30:19.343391    2872 cli_runner.go:164] Run: docker container inspect ha-309900 --format={{.State.Status}}
	I1212 20:30:19.402348    2872 status.go:371] ha-309900 host status = "Running" (err=<nil>)
	I1212 20:30:19.402986    2872 host.go:66] Checking if "ha-309900" exists ...
	I1212 20:30:19.405954    2872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-309900
	I1212 20:30:19.455952    2872 host.go:66] Checking if "ha-309900" exists ...
	I1212 20:30:19.459955    2872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:30:19.463959    2872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-309900
	I1212 20:30:19.519953    2872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57668 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-309900\id_rsa Username:docker}
	I1212 20:30:19.643239    2872 ssh_runner.go:195] Run: systemctl --version
	I1212 20:30:19.662004    2872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:30:19.684995    2872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-309900
	I1212 20:30:19.741407    2872 kubeconfig.go:125] found "ha-309900" server: "https://127.0.0.1:57667"
	I1212 20:30:19.741407    2872 api_server.go:166] Checking apiserver status ...
	I1212 20:30:19.746088    2872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:30:19.773063    2872 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2312/cgroup
	I1212 20:30:19.786065    2872 api_server.go:182] apiserver freezer: "7:freezer:/docker/eb571391cc90d93b454c26847dfbb89c988deac228080521e8961ffe830a5d14/kubepods/burstable/pod995d1dee8ccb097680d5a21c54a6ea07/61cef07f6b9c964dcc0bf5ef8b5b21658ee519ada7b7fdd9250d19231794118a"
	I1212 20:30:19.790063    2872 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/eb571391cc90d93b454c26847dfbb89c988deac228080521e8961ffe830a5d14/kubepods/burstable/pod995d1dee8ccb097680d5a21c54a6ea07/61cef07f6b9c964dcc0bf5ef8b5b21658ee519ada7b7fdd9250d19231794118a/freezer.state
	I1212 20:30:19.802068    2872 api_server.go:204] freezer state: "THAWED"
	I1212 20:30:19.802068    2872 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57667/healthz ...
	I1212 20:30:19.810229    2872 api_server.go:279] https://127.0.0.1:57667/healthz returned 200:
	ok
	I1212 20:30:19.810229    2872 status.go:463] ha-309900 apiserver status = Running (err=<nil>)
	I1212 20:30:19.810229    2872 status.go:176] ha-309900 status: &{Name:ha-309900 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 20:30:19.810229    2872 status.go:174] checking status of ha-309900-m02 ...
	I1212 20:30:19.818184    2872 cli_runner.go:164] Run: docker container inspect ha-309900-m02 --format={{.State.Status}}
	I1212 20:30:19.872567    2872 status.go:371] ha-309900-m02 host status = "Stopped" (err=<nil>)
	I1212 20:30:19.872567    2872 status.go:384] host is not running, skipping remaining checks
	I1212 20:30:19.872567    2872 status.go:176] ha-309900-m02 status: &{Name:ha-309900-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 20:30:19.872567    2872 status.go:174] checking status of ha-309900-m03 ...
	I1212 20:30:19.880046    2872 cli_runner.go:164] Run: docker container inspect ha-309900-m03 --format={{.State.Status}}
	I1212 20:30:19.934763    2872 status.go:371] ha-309900-m03 host status = "Running" (err=<nil>)
	I1212 20:30:19.934763    2872 host.go:66] Checking if "ha-309900-m03" exists ...
	I1212 20:30:19.938790    2872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-309900-m03
	I1212 20:30:19.993024    2872 host.go:66] Checking if "ha-309900-m03" exists ...
	I1212 20:30:19.997660    2872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:30:20.001444    2872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-309900-m03
	I1212 20:30:20.055351    2872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57791 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-309900-m03\id_rsa Username:docker}
	I1212 20:30:20.192729    2872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:30:20.213622    2872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-309900
	I1212 20:30:20.267764    2872 kubeconfig.go:125] found "ha-309900" server: "https://127.0.0.1:57667"
	I1212 20:30:20.267823    2872 api_server.go:166] Checking apiserver status ...
	I1212 20:30:20.271926    2872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:30:20.294794    2872 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2250/cgroup
	I1212 20:30:20.308920    2872 api_server.go:182] apiserver freezer: "7:freezer:/docker/5c584348f637884d37a2f5b262f932228278c01b8b3455315b68c671b4994e58/kubepods/burstable/podeda72ea08a9d6a64a2c424f8cc8ac3c3/e00ed1c9ff23bed37c8d465550e3b996e791d8a77e02460b7a1d9ba5e294e05a"
	I1212 20:30:20.313065    2872 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5c584348f637884d37a2f5b262f932228278c01b8b3455315b68c671b4994e58/kubepods/burstable/podeda72ea08a9d6a64a2c424f8cc8ac3c3/e00ed1c9ff23bed37c8d465550e3b996e791d8a77e02460b7a1d9ba5e294e05a/freezer.state
	I1212 20:30:20.325714    2872 api_server.go:204] freezer state: "THAWED"
	I1212 20:30:20.325714    2872 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57667/healthz ...
	I1212 20:30:20.334970    2872 api_server.go:279] https://127.0.0.1:57667/healthz returned 200:
	ok
	I1212 20:30:20.334970    2872 status.go:463] ha-309900-m03 apiserver status = Running (err=<nil>)
	I1212 20:30:20.334970    2872 status.go:176] ha-309900-m03 status: &{Name:ha-309900-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 20:30:20.334970    2872 status.go:174] checking status of ha-309900-m04 ...
	I1212 20:30:20.341851    2872 cli_runner.go:164] Run: docker container inspect ha-309900-m04 --format={{.State.Status}}
	I1212 20:30:20.394431    2872 status.go:371] ha-309900-m04 host status = "Running" (err=<nil>)
	I1212 20:30:20.394431    2872 host.go:66] Checking if "ha-309900-m04" exists ...
	I1212 20:30:20.398596    2872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-309900-m04
	I1212 20:30:20.452645    2872 host.go:66] Checking if "ha-309900-m04" exists ...
	I1212 20:30:20.457994    2872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:30:20.461080    2872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-309900-m04
	I1212 20:30:20.519595    2872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57926 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-309900-m04\id_rsa Username:docker}
	I1212 20:30:20.652946    2872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:30:20.670551    2872 status.go:176] ha-309900-m04 status: &{Name:ha-309900-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.49s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.5575121s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.56s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 node start m02 --alsologtostderr -v 5
E1212 20:30:45.698903   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-windows-amd64.exe -p ha-309900 node start m02 --alsologtostderr -v 5: (1m41.6135771s)
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-windows-amd64.exe -p ha-309900 status --alsologtostderr -v 5: (1.937663s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (103.68s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.0382711s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.04s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 stop --alsologtostderr -v 5
E1212 20:32:31.875399   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-windows-amd64.exe -p ha-309900 stop --alsologtostderr -v 5: (39.1377609s)
ha_test.go:469: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 start --wait true --alsologtostderr -v 5
E1212 20:34:54.794065   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:35:17.985834   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-windows-amd64.exe -p ha-309900 start --wait true --alsologtostderr -v 5: (4m23.5178458s)
ha_test.go:474: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (302.98s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-windows-amd64.exe -p ha-309900 node delete m03 --alsologtostderr -v 5: (12.4958114s)
ha_test.go:495: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Done: out/minikube-windows-amd64.exe -p ha-309900 status --alsologtostderr -v 5: (1.5332089s)
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (14.44s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.4960354s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.50s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 stop --alsologtostderr -v 5
E1212 20:37:31.879514   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p ha-309900 stop --alsologtostderr -v 5: (35.1505639s)
ha_test.go:539: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-309900 status --alsologtostderr -v 5: exit status 7 (316.8089ms)

-- stdout --
	ha-309900
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-309900-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-309900-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1212 20:38:02.123150    7612 out.go:360] Setting OutFile to fd 1824 ...
	I1212 20:38:02.165708    7612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:38:02.165708    7612 out.go:374] Setting ErrFile to fd 1996...
	I1212 20:38:02.165708    7612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:38:02.175706    7612 out.go:368] Setting JSON to false
	I1212 20:38:02.175706    7612 mustload.go:66] Loading cluster: ha-309900
	I1212 20:38:02.176707    7612 notify.go:221] Checking for updates...
	I1212 20:38:02.176707    7612 config.go:182] Loaded profile config "ha-309900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1212 20:38:02.176707    7612 status.go:174] checking status of ha-309900 ...
	I1212 20:38:02.184714    7612 cli_runner.go:164] Run: docker container inspect ha-309900 --format={{.State.Status}}
	I1212 20:38:02.233709    7612 status.go:371] ha-309900 host status = "Stopped" (err=<nil>)
	I1212 20:38:02.233709    7612 status.go:384] host is not running, skipping remaining checks
	I1212 20:38:02.233709    7612 status.go:176] ha-309900 status: &{Name:ha-309900 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 20:38:02.233709    7612 status.go:174] checking status of ha-309900-m02 ...
	I1212 20:38:02.239708    7612 cli_runner.go:164] Run: docker container inspect ha-309900-m02 --format={{.State.Status}}
	I1212 20:38:02.289708    7612 status.go:371] ha-309900-m02 host status = "Stopped" (err=<nil>)
	I1212 20:38:02.289708    7612 status.go:384] host is not running, skipping remaining checks
	I1212 20:38:02.289708    7612 status.go:176] ha-309900-m02 status: &{Name:ha-309900-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 20:38:02.289708    7612 status.go:174] checking status of ha-309900-m04 ...
	I1212 20:38:02.296708    7612 cli_runner.go:164] Run: docker container inspect ha-309900-m04 --format={{.State.Status}}
	I1212 20:38:02.344710    7612 status.go:371] ha-309900-m04 host status = "Stopped" (err=<nil>)
	I1212 20:38:02.344710    7612 status.go:384] host is not running, skipping remaining checks
	I1212 20:38:02.345709    7612 status.go:176] ha-309900-m04 status: &{Name:ha-309900-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.47s)

TestMultiControlPlane/serial/RestartCluster (81.32s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 start --wait true --alsologtostderr -v 5 --driver=docker
ha_test.go:562: (dbg) Done: out/minikube-windows-amd64.exe -p ha-309900 start --wait true --alsologtostderr -v 5 --driver=docker: (1m19.5053113s)
ha_test.go:568: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 status --alsologtostderr -v 5
ha_test.go:568: (dbg) Done: out/minikube-windows-amd64.exe -p ha-309900 status --alsologtostderr -v 5: (1.4154011s)
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (81.32s)
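The go-template in the last `kubectl get nodes` step extracts each node's Ready condition. As a minimal sketch of the same check applied to a `kubectl get nodes -o json` payload (the node names and condition values below are illustrative, not captured from this run):

```python
import json

# Sample payload in the shape of `kubectl get nodes -o json` (illustrative).
nodes_json = json.dumps({
    "items": [
        {"metadata": {"name": "ha-309900"},
         "status": {"conditions": [
             {"type": "MemoryPressure", "status": "False"},
             {"type": "Ready", "status": "True"}]}},
        {"metadata": {"name": "ha-309900-m04"},
         "status": {"conditions": [
             {"type": "Ready", "status": "True"}]}},
    ]
})

doc = json.loads(nodes_json)
# Mirror the go-template: for every node, emit the status of its Ready condition.
ready = [c["status"]
         for item in doc["items"]
         for c in item["status"]["conditions"]
         if c["type"] == "Ready"]
print(ready)  # one entry per node, "True" when the node is Ready
```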

TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.6429844s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.64s)

TestMultiControlPlane/serial/AddSecondaryNode (96.39s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 node add --control-plane --alsologtostderr -v 5
E1212 20:39:54.797264   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:40:17.990139   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-windows-amd64.exe -p ha-309900 node add --control-plane --alsologtostderr -v 5: (1m34.462789s)
ha_test.go:613: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-309900 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-windows-amd64.exe -p ha-309900 status --alsologtostderr -v 5: (1.9240129s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (96.39s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (2s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.0007836s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (2.00s)

TestImageBuild/serial/Setup (48.55s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-178200 --driver=docker
E1212 20:41:41.070214   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-178200 --driver=docker: (48.5492784s)
--- PASS: TestImageBuild/serial/Setup (48.55s)

TestImageBuild/serial/NormalBuild (3.89s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-178200
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-178200: (3.8876883s)
--- PASS: TestImageBuild/serial/NormalBuild (3.89s)

TestImageBuild/serial/BuildWithBuildArg (2.42s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-178200
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-178200: (2.4212736s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (2.42s)

TestImageBuild/serial/BuildWithDockerIgnore (1.19s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-178200
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-178200: (1.1925775s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.19s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.23s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-178200
E1212 20:42:14.960179   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-178200: (1.2338875s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.23s)

TestJSONOutput/start/Command (76.68s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-664200 --output=json --user=testUser --memory=3072 --wait=true --driver=docker
E1212 20:42:31.883040   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-664200 --output=json --user=testUser --memory=3072 --wait=true --driver=docker: (1m16.6780321s)
--- PASS: TestJSONOutput/start/Command (76.68s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (1.13s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-664200 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-664200 --output=json --user=testUser: (1.1292424s)
--- PASS: TestJSONOutput/pause/Command (1.13s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.88s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-664200 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.88s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.18s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-664200 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-664200 --output=json --user=testUser: (12.18064s)
--- PASS: TestJSONOutput/stop/Command (12.18s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.67s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-919900 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-919900 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (196.3465ms)

-- stdout --
	{"specversion":"1.0","id":"49d59c62-38b2-45ec-b1ac-065088b2eeb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-919900] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"28f4d240-05d9-45a9-a25a-c5193566ea83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube4\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"62d4b2f8-f816-41b4-978d-08f2efb0b782","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"310bca6c-58b2-4d23-b521-98ebb14e8d6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"3516acba-56a7-4cec-8086-30cfee88019d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22112"}}
	{"specversion":"1.0","id":"a7d099f1-7373-45a0-9232-72bf2678577a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ff50f5fc-f9b4-488c-9d26-3a9435e4b54b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-919900" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-919900
--- PASS: TestErrorJSONOutput (0.67s)
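Each line of the `--output=json` stream above is a CloudEvents-style JSON envelope, and a consumer routes on its `type` field. A minimal sketch over the error event from the stdout above (abridged to drop the `id` and `source` fields):

```python
import json

# One event line from the stdout above, abridged for readability.
line = """{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}"""

event = json.loads(line)
if event["type"] == "io.k8s.sigs.minikube.error":
    # The exit code is carried as a string inside the envelope's data payload.
    print(event["data"]["name"], event["data"]["exitcode"])
# prints: DRV_UNSUPPORTED_OS 56
```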

TestKicCustomNetwork/create_custom_network (53.47s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-927100 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-927100 --network=: (49.8196021s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-927100" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-927100
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-927100: (3.5859955s)
--- PASS: TestKicCustomNetwork/create_custom_network (53.47s)

TestKicCustomNetwork/use_default_bridge_network (53.43s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-467200 --network=bridge
E1212 20:44:54.802440   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:45:17.994172   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-467200 --network=bridge: (50.181746s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-467200" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-467200
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-467200: (3.1878196s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (53.43s)

TestKicExistingNetwork (54.49s)

=== RUN   TestKicExistingNetwork
I1212 20:45:39.924459   13396 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1212 20:45:39.981155   13396 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1212 20:45:39.985347   13396 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1212 20:45:39.985386   13396 cli_runner.go:164] Run: docker network inspect existing-network
W1212 20:45:40.043289   13396 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1212 20:45:40.043289   13396 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1212 20:45:40.043289   13396 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1212 20:45:40.047948   13396 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1212 20:45:40.119492   13396 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001846870}
I1212 20:45:40.119492   13396 network_create.go:124] attempt to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1212 20:45:40.126245   13396 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
W1212 20:45:40.183969   13396 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network returned with exit code 1
W1212 20:45:40.183969   13396 network_create.go:149] failed to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network: exit status 1
stdout:

stderr:
Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
W1212 20:45:40.183969   13396 network_create.go:116] failed to create docker network existing-network 192.168.49.0/24, will retry: subnet is taken
I1212 20:45:40.202948   13396 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1212 20:45:40.218625   13396 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b75680}
I1212 20:45:40.218625   13396 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1212 20:45:40.223690   13396 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1212 20:45:40.363905   13396 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-905700 --network=existing-network
E1212 20:46:17.881143   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-905700 --network=existing-network: (50.7112259s)
helpers_test.go:176: Cleaning up "existing-network-905700" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-905700
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-905700: (3.2023762s)
I1212 20:46:34.350184   13396 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (54.49s)
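The log above shows the subnet-retry loop: 192.168.49.0/24 is rejected ("Pool overlaps with other one on this address space"), so the next candidate, 192.168.58.0/24, is created instead. A sketch of that walk; the step of 9 subnets (49.0 to 58.0) is inferred from this run's output, not taken from minikube's source:

```python
import ipaddress

def next_free_subnet(start="192.168.49.0/24", taken=(), step=9, attempts=20):
    """Walk candidate /24 subnets `step` networks apart until one is free.

    `taken` stands in for subnets the Docker daemon reports as overlapping.
    """
    net = ipaddress.ip_network(start)
    for _ in range(attempts):
        if str(net) not in taken:
            return str(net)
        # Advance by `step` whole networks of the same prefix length.
        net = ipaddress.ip_network(
            (int(net.network_address) + step * net.num_addresses,
             net.prefixlen))
    raise RuntimeError("no free subnet found")

print(next_free_subnet(taken={"192.168.49.0/24"}))
# prints: 192.168.58.0/24, matching the retry seen in the log
```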

TestKicCustomSubnet (54.05s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-834800 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-834800 --subnet=192.168.60.0/24: (50.5514693s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-834800 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-834800" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-834800
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-834800: (3.4289281s)
--- PASS: TestKicCustomSubnet (54.05s)

TestKicStaticIP (54.38s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe start -p static-ip-789600 --static-ip=192.168.200.200
E1212 20:47:31.886306   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe start -p static-ip-789600 --static-ip=192.168.200.200: (50.4796637s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe -p static-ip-789600 ip
helpers_test.go:176: Cleaning up "static-ip-789600" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p static-ip-789600
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p static-ip-789600: (3.5792374s)
--- PASS: TestKicStaticIP (54.38s)

TestMainNoArgs (0.17s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.17s)

TestMinikubeProfile (98.76s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-188700 --driver=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-188700 --driver=docker: (44.9433637s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-188700 --driver=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-188700 --driver=docker: (43.7838634s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-188700
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (1.1782012s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-188700
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (1.195882s)
helpers_test.go:176: Cleaning up "second-188700" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-188700
E1212 20:49:54.805606   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-188700: (3.6118751s)
helpers_test.go:176: Cleaning up "first-188700" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-188700
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-188700: (3.6084936s)
--- PASS: TestMinikubeProfile (98.76s)

TestMountStart/serial/StartWithMountFirst (13.85s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-071700 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial3121709011\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:118: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-071700 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial3121709011\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (12.8546647s)
--- PASS: TestMountStart/serial/StartWithMountFirst (13.85s)

TestMountStart/serial/VerifyMountFirst (0.58s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-071700 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.58s)

TestMountStart/serial/StartWithMountSecond (13.55s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-071700 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial3121709011\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
E1212 20:50:17.998241   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-071700 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial3121709011\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (12.5495894s)
--- PASS: TestMountStart/serial/StartWithMountSecond (13.55s)

TestMountStart/serial/VerifyMountSecond (0.55s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-071700 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.55s)

TestMountStart/serial/DeleteFirst (2.43s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-071700 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-071700 --alsologtostderr -v=5: (2.4293864s)
--- PASS: TestMountStart/serial/DeleteFirst (2.43s)

TestMountStart/serial/VerifyMountPostDelete (0.55s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-071700 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.55s)

TestMountStart/serial/Stop (1.87s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-071700
mount_start_test.go:196: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-071700: (1.866306s)
--- PASS: TestMountStart/serial/Stop (1.87s)

TestMountStart/serial/RestartStopped (10.80s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-071700
mount_start_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-071700: (9.7999025s)
--- PASS: TestMountStart/serial/RestartStopped (10.80s)

TestMountStart/serial/VerifyMountPostStop (0.55s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-071700 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.55s)

TestMultiNode/serial/FreshStart2Nodes (131.07s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-184800 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker
E1212 20:52:31.890885   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-184800 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker: (2m10.0975869s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (131.07s)

TestMultiNode/serial/DeployApp2Nodes (7.19s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-184800 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-184800 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-184800 -- rollout status deployment/busybox: (3.3312686s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-184800 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-184800 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-184800 -- exec busybox-7b57f96db7-c46kt -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-184800 -- exec busybox-7b57f96db7-llx96 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-184800 -- exec busybox-7b57f96db7-c46kt -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-184800 -- exec busybox-7b57f96db7-llx96 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-184800 -- exec busybox-7b57f96db7-c46kt -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-184800 -- exec busybox-7b57f96db7-llx96 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.19s)

TestMultiNode/serial/PingHostFrom2Pods (1.80s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-184800 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-184800 -- exec busybox-7b57f96db7-c46kt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-184800 -- exec busybox-7b57f96db7-c46kt -- sh -c "ping -c 1 192.168.65.254"
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-184800 -- exec busybox-7b57f96db7-llx96 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-184800 -- exec busybox-7b57f96db7-llx96 -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.80s)

TestMultiNode/serial/AddNode (53.66s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-184800 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-184800 -v=5 --alsologtostderr: (52.3501987s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-184800 status --alsologtostderr: (1.3101468s)
--- PASS: TestMultiNode/serial/AddNode (53.66s)

TestMultiNode/serial/MultiNodeLabels (0.14s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-184800 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.14s)

TestMultiNode/serial/ProfileList (1.39s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.3895833s)
--- PASS: TestMultiNode/serial/ProfileList (1.39s)

TestMultiNode/serial/CopyFile (19.35s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-184800 status --output json --alsologtostderr: (1.319368s)
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 cp testdata\cp-test.txt multinode-184800:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 ssh -n multinode-184800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 cp multinode-184800:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile2143042154\001\cp-test_multinode-184800.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 ssh -n multinode-184800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 cp multinode-184800:/home/docker/cp-test.txt multinode-184800-m02:/home/docker/cp-test_multinode-184800_multinode-184800-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 ssh -n multinode-184800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 ssh -n multinode-184800-m02 "sudo cat /home/docker/cp-test_multinode-184800_multinode-184800-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 cp multinode-184800:/home/docker/cp-test.txt multinode-184800-m03:/home/docker/cp-test_multinode-184800_multinode-184800-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 ssh -n multinode-184800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 ssh -n multinode-184800-m03 "sudo cat /home/docker/cp-test_multinode-184800_multinode-184800-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 cp testdata\cp-test.txt multinode-184800-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 ssh -n multinode-184800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 cp multinode-184800-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile2143042154\001\cp-test_multinode-184800-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 ssh -n multinode-184800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 cp multinode-184800-m02:/home/docker/cp-test.txt multinode-184800:/home/docker/cp-test_multinode-184800-m02_multinode-184800.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 ssh -n multinode-184800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 ssh -n multinode-184800 "sudo cat /home/docker/cp-test_multinode-184800-m02_multinode-184800.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 cp multinode-184800-m02:/home/docker/cp-test.txt multinode-184800-m03:/home/docker/cp-test_multinode-184800-m02_multinode-184800-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 ssh -n multinode-184800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 ssh -n multinode-184800-m03 "sudo cat /home/docker/cp-test_multinode-184800-m02_multinode-184800-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 cp testdata\cp-test.txt multinode-184800-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 ssh -n multinode-184800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 cp multinode-184800-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile2143042154\001\cp-test_multinode-184800-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 ssh -n multinode-184800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 cp multinode-184800-m03:/home/docker/cp-test.txt multinode-184800:/home/docker/cp-test_multinode-184800-m03_multinode-184800.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 ssh -n multinode-184800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 ssh -n multinode-184800 "sudo cat /home/docker/cp-test_multinode-184800-m03_multinode-184800.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 cp multinode-184800-m03:/home/docker/cp-test.txt multinode-184800-m02:/home/docker/cp-test_multinode-184800-m03_multinode-184800-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 ssh -n multinode-184800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 ssh -n multinode-184800-m02 "sudo cat /home/docker/cp-test_multinode-184800-m03_multinode-184800-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (19.35s)

TestMultiNode/serial/StopNode (3.89s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-184800 node stop m03: (1.8224866s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-184800 status: exit status 7 (1.0494884s)

-- stdout --
	multinode-184800
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-184800-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-184800-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-184800 status --alsologtostderr: exit status 7 (1.0167366s)

-- stdout --
	multinode-184800
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-184800-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-184800-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1212 20:54:27.026481    6444 out.go:360] Setting OutFile to fd 1344 ...
	I1212 20:54:27.068477    6444 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:54:27.068477    6444 out.go:374] Setting ErrFile to fd 1296...
	I1212 20:54:27.068477    6444 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:54:27.079479    6444 out.go:368] Setting JSON to false
	I1212 20:54:27.079479    6444 mustload.go:66] Loading cluster: multinode-184800
	I1212 20:54:27.079479    6444 notify.go:221] Checking for updates...
	I1212 20:54:27.080473    6444 config.go:182] Loaded profile config "multinode-184800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1212 20:54:27.080473    6444 status.go:174] checking status of multinode-184800 ...
	I1212 20:54:27.087474    6444 cli_runner.go:164] Run: docker container inspect multinode-184800 --format={{.State.Status}}
	I1212 20:54:27.139476    6444 status.go:371] multinode-184800 host status = "Running" (err=<nil>)
	I1212 20:54:27.139476    6444 host.go:66] Checking if "multinode-184800" exists ...
	I1212 20:54:27.142473    6444 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-184800
	I1212 20:54:27.192473    6444 host.go:66] Checking if "multinode-184800" exists ...
	I1212 20:54:27.197472    6444 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:54:27.200493    6444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-184800
	I1212 20:54:27.249474    6444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59085 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\multinode-184800\id_rsa Username:docker}
	I1212 20:54:27.377637    6444 ssh_runner.go:195] Run: systemctl --version
	I1212 20:54:27.394519    6444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:54:27.416350    6444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-184800
	I1212 20:54:27.472963    6444 kubeconfig.go:125] found "multinode-184800" server: "https://127.0.0.1:59089"
	I1212 20:54:27.473061    6444 api_server.go:166] Checking apiserver status ...
	I1212 20:54:27.479510    6444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:54:27.505641    6444 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2270/cgroup
	I1212 20:54:27.518565    6444 api_server.go:182] apiserver freezer: "7:freezer:/docker/416e33decd4b431f2c2aa61228381955c71e7ab187ff52bc8cddcd7d78f7bcac/kubepods/burstable/pod3d31315066926b38bb4b3941bbe5880c/fbdaa8fc00d83ec3da1d0ac27e67f15eb5f2c6821f00f50f58ffff6bf80f5ecf"
	I1212 20:54:27.524047    6444 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/416e33decd4b431f2c2aa61228381955c71e7ab187ff52bc8cddcd7d78f7bcac/kubepods/burstable/pod3d31315066926b38bb4b3941bbe5880c/fbdaa8fc00d83ec3da1d0ac27e67f15eb5f2c6821f00f50f58ffff6bf80f5ecf/freezer.state
	I1212 20:54:27.537848    6444 api_server.go:204] freezer state: "THAWED"
	I1212 20:54:27.537848    6444 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:59089/healthz ...
	I1212 20:54:27.551748    6444 api_server.go:279] https://127.0.0.1:59089/healthz returned 200:
	ok
	I1212 20:54:27.551748    6444 status.go:463] multinode-184800 apiserver status = Running (err=<nil>)
	I1212 20:54:27.551748    6444 status.go:176] multinode-184800 status: &{Name:multinode-184800 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 20:54:27.551748    6444 status.go:174] checking status of multinode-184800-m02 ...
	I1212 20:54:27.558746    6444 cli_runner.go:164] Run: docker container inspect multinode-184800-m02 --format={{.State.Status}}
	I1212 20:54:27.614376    6444 status.go:371] multinode-184800-m02 host status = "Running" (err=<nil>)
	I1212 20:54:27.614376    6444 host.go:66] Checking if "multinode-184800-m02" exists ...
	I1212 20:54:27.618436    6444 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-184800-m02
	I1212 20:54:27.672921    6444 host.go:66] Checking if "multinode-184800-m02" exists ...
	I1212 20:54:27.678690    6444 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:54:27.681417    6444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-184800-m02
	I1212 20:54:27.737260    6444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59134 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\multinode-184800-m02\id_rsa Username:docker}
	I1212 20:54:27.873917    6444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:54:27.891400    6444 status.go:176] multinode-184800-m02 status: &{Name:multinode-184800-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1212 20:54:27.891489    6444 status.go:174] checking status of multinode-184800-m03 ...
	I1212 20:54:27.898620    6444 cli_runner.go:164] Run: docker container inspect multinode-184800-m03 --format={{.State.Status}}
	I1212 20:54:27.952645    6444 status.go:371] multinode-184800-m03 host status = "Stopped" (err=<nil>)
	I1212 20:54:27.952645    6444 status.go:384] host is not running, skipping remaining checks
	I1212 20:54:27.952645    6444 status.go:176] multinode-184800-m03 status: &{Name:multinode-184800-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.89s)

TestMultiNode/serial/StartAfterStop (13.24s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-184800 node start m03 -v=5 --alsologtostderr: (11.8163678s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 status -v=5 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-184800 status -v=5 --alsologtostderr: (1.2914371s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.24s)

TestMultiNode/serial/RestartKeepsNodes (85.93s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-184800
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-184800
E1212 20:54:54.809881   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-184800: (24.9358722s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-184800 --wait=true -v=5 --alsologtostderr
E1212 20:55:18.002508   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-184800 --wait=true -v=5 --alsologtostderr: (1m0.6868817s)
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-184800
--- PASS: TestMultiNode/serial/RestartKeepsNodes (85.93s)

TestMultiNode/serial/DeleteNode (8.28s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-184800 node delete m03: (6.9409585s)
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (8.28s)

TestMultiNode/serial/StopMultiNode (24.01s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 stop
multinode_test.go:345: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-184800 stop: (23.4316501s)
multinode_test.go:351: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-184800 status: exit status 7 (283.9173ms)

-- stdout --
	multinode-184800
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-184800-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-184800 status --alsologtostderr: exit status 7 (292.478ms)

-- stdout --
	multinode-184800
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-184800-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1212 20:56:39.216261   10032 out.go:360] Setting OutFile to fd 1308 ...
	I1212 20:56:39.258254   10032 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:56:39.258254   10032 out.go:374] Setting ErrFile to fd 1260...
	I1212 20:56:39.258254   10032 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:56:39.269257   10032 out.go:368] Setting JSON to false
	I1212 20:56:39.269257   10032 mustload.go:66] Loading cluster: multinode-184800
	I1212 20:56:39.269257   10032 notify.go:221] Checking for updates...
	I1212 20:56:39.269257   10032 config.go:182] Loaded profile config "multinode-184800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1212 20:56:39.269257   10032 status.go:174] checking status of multinode-184800 ...
	I1212 20:56:39.277254   10032 cli_runner.go:164] Run: docker container inspect multinode-184800 --format={{.State.Status}}
	I1212 20:56:39.337065   10032 status.go:371] multinode-184800 host status = "Stopped" (err=<nil>)
	I1212 20:56:39.337065   10032 status.go:384] host is not running, skipping remaining checks
	I1212 20:56:39.337065   10032 status.go:176] multinode-184800 status: &{Name:multinode-184800 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 20:56:39.337065   10032 status.go:174] checking status of multinode-184800-m02 ...
	I1212 20:56:39.344348   10032 cli_runner.go:164] Run: docker container inspect multinode-184800-m02 --format={{.State.Status}}
	I1212 20:56:39.402509   10032 status.go:371] multinode-184800-m02 host status = "Stopped" (err=<nil>)
	I1212 20:56:39.402509   10032 status.go:384] host is not running, skipping remaining checks
	I1212 20:56:39.402509   10032 status.go:176] multinode-184800-m02 status: &{Name:multinode-184800-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.01s)

TestMultiNode/serial/RestartMultiNode (56.64s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-184800 --wait=true -v=5 --alsologtostderr --driver=docker
E1212 20:57:31.895840   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-184800 --wait=true -v=5 --alsologtostderr --driver=docker: (55.2642335s)
multinode_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-184800 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.64s)

TestMultiNode/serial/ValidateNameConflict (49.81s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-184800
multinode_test.go:464: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-184800-m02 --driver=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-184800-m02 --driver=docker: exit status 14 (218.0828ms)

-- stdout --
	* [multinode-184800-m02] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-184800-m02' is duplicated with machine name 'multinode-184800-m02' in profile 'multinode-184800'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-184800-m03 --driver=docker
E1212 20:58:21.087681   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-184800-m03 --driver=docker: (45.1537347s)
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-184800
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-184800: exit status 80 (650.6725ms)

-- stdout --
	* Adding node m03 to cluster multinode-184800 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-184800-m03 already exists in multinode-184800-m03 profile
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_node_6ccce2fc44e3bb58d6c4f91e09ae7c7eaaf65535_15.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-184800-m03
multinode_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-184800-m03: (3.6361971s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (49.81s)

TestPreload (159.98s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-898200 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker
E1212 20:58:54.977307   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:59:54.814718   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-898200 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker: (1m34.3888397s)
preload_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-898200 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-898200 image pull gcr.io/k8s-minikube/busybox: (2.1124417s)
preload_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-898200
E1212 21:00:18.007289   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-898200: (12.0775048s)
preload_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-898200 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker
preload_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-898200 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker: (47.2609784s)
preload_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-898200 image list
helpers_test.go:176: Cleaning up "test-preload-898200" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-898200
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-898200: (3.6327456s)
--- PASS: TestPreload (159.98s)

TestScheduledStopWindows (113.76s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-741800 --memory=3072 --driver=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-741800 --memory=3072 --driver=docker: (47.5646268s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-741800 --schedule 5m
minikube stop output:

scheduled_stop_test.go:204: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-741800 -n scheduled-stop-741800
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-741800 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-741800 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-741800 --schedule 5s: (1.0051252s)
minikube stop output:

E1212 21:02:31.899613   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:02:57.898227   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-741800
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-741800: exit status 7 (217.9391ms)

-- stdout --
	scheduled-stop-741800
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-741800 -n scheduled-stop-741800
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-741800 -n scheduled-stop-741800: exit status 7 (211.5475ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-741800" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-741800
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-741800: (2.4832182s)
--- PASS: TestScheduledStopWindows (113.76s)

TestInsufficientStorage (28.76s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-159700 --memory=3072 --output=json --wait=true --driver=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-159700 --memory=3072 --output=json --wait=true --driver=docker: exit status 26 (24.9343797s)

-- stdout --
	{"specversion":"1.0","id":"39aea02d-07e1-4d4a-84a4-63710589bc75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-159700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"07bfaebe-f20e-4734-b2b6-add0d3189a58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube4\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"4d8ea55b-3c3e-4c45-9db7-f0db8c3e4461","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7ee6af90-8902-44e0-b4f3-9ad395989b04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"fd3b7350-8fe1-4416-bcd8-4bbc670f8145","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22112"}}
	{"specversion":"1.0","id":"200151b2-d124-4ad6-9b28-99142cabc7d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"352a9cb7-8c04-4cda-be8e-d80afa7f3511","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"fa2fe494-3f6c-47fa-9bcf-54968df2d580","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"d8133f52-0158-42f9-845b-598516b5da28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a8f8234b-fc3f-47a9-b974-aabd1fa26db0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"94589758-a2ff-49f4-ade8-0a0eee01d4dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-159700\" primary control-plane node in \"insufficient-storage-159700\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8c7efad8-0dc6-49f2-a974-fc099d44fa7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765505794-22112 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"90b182cf-3933-4c41-9396-ab1a4fdc57e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"959fedb2-26dd-43f8-b8f9-6e50a125c1c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-159700 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-159700 --output=json --layout=cluster: exit status 7 (580.1171ms)

-- stdout --
	{"Name":"insufficient-storage-159700","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-159700","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1212 21:03:31.538818   12708 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-159700" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-159700 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-159700 --output=json --layout=cluster: exit status 7 (563.7215ms)

-- stdout --
	{"Name":"insufficient-storage-159700","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-159700","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1212 21:03:32.103651    4932 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-159700" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	E1212 21:03:32.126805    4932 status.go:258] unable to read event log: stat: GetFileAttributesEx C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\insufficient-storage-159700\events.json: The system cannot find the file specified.

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-159700" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-159700
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-159700: (2.677553s)
--- PASS: TestInsufficientStorage (28.76s)
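With `--output=json`, minikube emits one CloudEvents-style JSON object per line, as in the stdout capture above. A minimal sketch of decoding such a line and pulling out the error fields (the struct below models only the fields visible in this log, not minikube's full event schema; the sample line is a trimmed copy of the `RSRC_DOCKER_STORAGE` error above):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// event models the subset of CloudEvents fields visible in the log.
type event struct {
	Specversion string            `json:"specversion"`
	ID          string            `json:"id"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

// parseEvent decodes one line of the --output=json stream.
func parseEvent(line string) (event, error) {
	var ev event
	err := json.Unmarshal([]byte(line), &ev)
	return ev, err
}

func main() {
	// Trimmed copy of the storage-error event from the run above;
	// note the data values are all JSON strings, including exitcode.
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE"}}`
	ev, err := parseEvent(line)
	if err != nil {
		panic(err)
	}
	fmt.Println(ev.Type, ev.Data["name"], ev.Data["exitcode"])
}
```

The test harness keys off the `io.k8s.sigs.minikube.error` event type and its `exitcode` of 26, which matches the `exit status 26` reported for the start command.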

TestRunningBinaryUpgrade (128.46s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.2707417540.exe start -p running-upgrade-296700 --memory=3072 --vm-driver=docker
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.2707417540.exe start -p running-upgrade-296700 --memory=3072 --vm-driver=docker: (1m1.9256693s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-296700 --memory=3072 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-296700 --memory=3072 --alsologtostderr -v=1 --driver=docker: (1m1.9124232s)
helpers_test.go:176: Cleaning up "running-upgrade-296700" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-296700
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-296700: (3.8678811s)
--- PASS: TestRunningBinaryUpgrade (128.46s)

TestMissingContainerUpgrade (141.65s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.2685059870.exe start -p missing-upgrade-621100 --memory=3072 --driver=docker
version_upgrade_test.go:309: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.2685059870.exe start -p missing-upgrade-621100 --memory=3072 --driver=docker: (1m2.4653379s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-621100
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-621100: (2.3668687s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-621100
version_upgrade_test.go:329: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-621100 --memory=3072 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-621100 --memory=3072 --alsologtostderr -v=1 --driver=docker: (1m12.0280409s)
helpers_test.go:176: Cleaning up "missing-upgrade-621100" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-621100
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-621100: (3.7154807s)
--- PASS: TestMissingContainerUpgrade (141.65s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.28s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-601600 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-601600 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker: exit status 14 (281.7008ms)

-- stdout --
	* [NoKubernetes-601600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.28s)

TestNoKubernetes/serial/StartWithK8s (101.22s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-601600 --memory=3072 --alsologtostderr -v=5 --driver=docker
no_kubernetes_test.go:120: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-601600 --memory=3072 --alsologtostderr -v=5 --driver=docker: (1m40.5004282s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-601600 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (101.22s)

TestStoppedBinaryUpgrade/Setup (1.63s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.63s)

TestStoppedBinaryUpgrade/Upgrade (410.61s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.776878831.exe start -p stopped-upgrade-900300 --memory=3072 --vm-driver=docker
E1212 21:04:54.818778   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.776878831.exe start -p stopped-upgrade-900300 --memory=3072 --vm-driver=docker: (1m48.6136651s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.776878831.exe -p stopped-upgrade-900300 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.776878831.exe -p stopped-upgrade-900300 stop: (12.2309216s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-900300 --memory=3072 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-900300 --memory=3072 --alsologtostderr -v=1 --driver=docker: (4m49.759113s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (410.61s)

TestNoKubernetes/serial/StartWithStopK8s (29.88s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-601600 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker
no_kubernetes_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-601600 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker: (26.1602015s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-601600 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-601600 status -o json: exit status 2 (592.8499ms)

-- stdout --
	{"Name":"NoKubernetes-601600","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-601600
no_kubernetes_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe delete -p NoKubernetes-601600: (3.1283854s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (29.88s)

TestNoKubernetes/serial/Start (14.7s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-601600 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker
no_kubernetes_test.go:161: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-601600 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker: (14.7023353s)
--- PASS: TestNoKubernetes/serial/Start (14.70s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\windows\amd64\v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.63s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-601600 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-601600 "sudo systemctl is-active --quiet service kubelet": exit status 1 (625.4697ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.63s)

TestNoKubernetes/serial/ProfileList (38.04s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-windows-amd64.exe profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-windows-amd64.exe profile list: (15.2116384s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-windows-amd64.exe profile list --output=json: (22.8231966s)
--- PASS: TestNoKubernetes/serial/ProfileList (38.04s)

TestNoKubernetes/serial/Stop (2.01s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe stop -p NoKubernetes-601600
no_kubernetes_test.go:183: (dbg) Done: out/minikube-windows-amd64.exe stop -p NoKubernetes-601600: (2.0076534s)
--- PASS: TestNoKubernetes/serial/Stop (2.01s)

TestNoKubernetes/serial/StartNoArgs (11.89s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-601600 --driver=docker
no_kubernetes_test.go:216: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-601600 --driver=docker: (11.8890146s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (11.89s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.55s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-601600 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-601600 "sudo systemctl is-active --quiet service kubelet": exit status 1 (546.8523ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.55s)

TestPause/serial/Start (81.47s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-269600 --memory=3072 --install-addons=false --wait=all --driver=docker
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-269600 --memory=3072 --install-addons=false --wait=all --driver=docker: (1m21.4717281s)
--- PASS: TestPause/serial/Start (81.47s)

TestPause/serial/SecondStartNoReconfiguration (43.81s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-269600 --alsologtostderr -v=1 --driver=docker
E1212 21:10:18.016559   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-269600 --alsologtostderr -v=1 --driver=docker: (43.787773s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (43.81s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.32s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-900300
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-900300: (1.3200643s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.32s)

TestPause/serial/Pause (1.18s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-269600 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-269600 --alsologtostderr -v=5: (1.1818932s)
--- PASS: TestPause/serial/Pause (1.18s)

TestPause/serial/VerifyStatus (0.64s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-269600 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-269600 --output=json --layout=cluster: exit status 2 (643.0099ms)

-- stdout --
	{"Name":"pause-269600","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-269600","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.64s)

TestPause/serial/Unpause (0.93s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-269600 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.93s)

TestPause/serial/PauseAgain (1.21s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-269600 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-269600 --alsologtostderr -v=5: (1.2140674s)
--- PASS: TestPause/serial/PauseAgain (1.21s)

TestPause/serial/DeletePaused (5.53s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-269600 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-269600 --alsologtostderr -v=5: (5.5334391s)
--- PASS: TestPause/serial/DeletePaused (5.53s)

TestPause/serial/VerifyDeletedResources (1.34s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.1525632s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-269600
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-269600: exit status 1 (57.0005ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-269600: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (1.34s)

TestNetworkPlugins/group/auto/Start (84.26s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-864500 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-864500 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker: (1m24.2552852s)
--- PASS: TestNetworkPlugins/group/auto/Start (84.26s)

TestNetworkPlugins/group/flannel/Start (72.58s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p flannel-864500 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker
E1212 21:12:31.909377   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p flannel-864500 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker: (1m12.5825248s)
--- PASS: TestNetworkPlugins/group/flannel/Start (72.58s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-t7qz9" [d09d0a82-79db-4574-a9aa-14ff7a14f03f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0067877s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/auto/KubeletFlags (0.56s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-864500 "pgrep -a kubelet"
I1212 21:13:05.069342   13396 config.go:182] Loaded profile config "auto-864500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.56s)

TestNetworkPlugins/group/auto/NetCatPod (16.56s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-864500 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-8fhzw" [848aad01-9d21-496d-b77c-481bae76ae6a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-8fhzw" [848aad01-9d21-496d-b77c-481bae76ae6a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 16.0066178s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (16.56s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.56s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p flannel-864500 "pgrep -a kubelet"
I1212 21:13:06.706565   13396 config.go:182] Loaded profile config "flannel-864500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.56s)

TestNetworkPlugins/group/flannel/NetCatPod (16.49s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-864500 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-gwwfj" [0b0e54aa-604c-4cbd-b226-d91b84966944] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-gwwfj" [0b0e54aa-604c-4cbd-b226-d91b84966944] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 16.0058444s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (16.49s)

TestNetworkPlugins/group/auto/DNS (0.26s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-864500 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.26s)

TestNetworkPlugins/group/auto/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-864500 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)

TestNetworkPlugins/group/auto/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-864500 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.21s)

TestNetworkPlugins/group/flannel/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-864500 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

TestNetworkPlugins/group/flannel/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-864500 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

TestNetworkPlugins/group/flannel/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-864500 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

TestNetworkPlugins/group/enable-default-cni/Start (100.98s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-864500 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-864500 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker: (1m40.978005s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (100.98s)

TestNetworkPlugins/group/bridge/Start (93.15s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-864500 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-864500 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker: (1m33.1530265s)
--- PASS: TestNetworkPlugins/group/bridge/Start (93.15s)

TestNetworkPlugins/group/kubenet/Start (96.87s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-864500 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker
E1212 21:14:54.828187   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-349200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:15:01.105403   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-864500 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker: (1m36.8737312s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (96.87s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.56s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-864500 "pgrep -a kubelet"
I1212 21:15:12.959711   13396 config.go:182] Loaded profile config "enable-default-cni-864500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.56s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.49s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-864500 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-f89wd" [a1ffd3e6-db01-4046-ba5a-b2f8b81c9c96] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1212 21:15:18.020662   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-f89wd" [a1ffd3e6-db01-4046-ba5a-b2f8b81c9c96] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 14.0113352s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.49s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-864500 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-864500 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-864500 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.57s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-864500 "pgrep -a kubelet"
I1212 21:15:31.444615   13396 config.go:182] Loaded profile config "bridge-864500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.57s)

TestNetworkPlugins/group/bridge/NetCatPod (14.47s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-864500 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-bn765" [66fe87bd-f1e9-4b0e-b2ee-52a53eddbde5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1212 21:15:34.994974   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-bn765" [66fe87bd-f1e9-4b0e-b2ee-52a53eddbde5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 14.0077521s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (14.47s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.57s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-864500 "pgrep -a kubelet"
I1212 21:15:37.881568   13396 config.go:182] Loaded profile config "kubenet-864500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.57s)

TestNetworkPlugins/group/kubenet/NetCatPod (16.58s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-864500 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-h96tr" [b1085481-5170-4423-a3b4-149bb56a4d4a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-h96tr" [b1085481-5170-4423-a3b4-149bb56a4d4a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 16.0067493s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (16.58s)

TestNetworkPlugins/group/bridge/DNS (0.26s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-864500 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.26s)

TestNetworkPlugins/group/bridge/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-864500 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.23s)

TestNetworkPlugins/group/bridge/HairPin (0.46s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-864500 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.46s)

TestNetworkPlugins/group/kubenet/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-864500 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.29s)

TestNetworkPlugins/group/kubenet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-864500 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.21s)

TestNetworkPlugins/group/kubenet/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-864500 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.21s)

TestNetworkPlugins/group/calico/Start (119.02s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-864500 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p calico-864500 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker: (1m59.015887s)
--- PASS: TestNetworkPlugins/group/calico/Start (119.02s)

TestNetworkPlugins/group/kindnet/Start (80.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-864500 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-864500 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker: (1m20.1075931s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (80.11s)

TestNetworkPlugins/group/custom-flannel/Start (75.04s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-flannel-864500 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker
E1212 21:17:31.914343   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-461000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-flannel-864500 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker: (1m15.038612s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (75.04s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-7btkw" [3f291c9e-b89e-42de-89f1-bb6abaffa1a3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.0064826s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.59s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p custom-flannel-864500 "pgrep -a kubelet"
I1212 21:17:47.468610   13396 config.go:182] Loaded profile config "custom-flannel-864500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.59s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (14.41s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-864500 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-85vlr" [4f195c4b-4b07-47b8-91cf-43a5e67f4329] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-85vlr" [4f195c4b-4b07-47b8-91cf-43a5e67f4329] Running
E1212 21:18:00.148694   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:18:00.155507   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:18:00.167580   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:18:00.189555   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:18:00.231811   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:18:00.313632   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:18:00.475804   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:18:00.797855   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 14.0062092s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (14.41s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.63s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-864500 "pgrep -a kubelet"
I1212 21:17:51.227685   13396 config.go:182] Loaded profile config "kindnet-864500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.63s)

TestNetworkPlugins/group/kindnet/NetCatPod (17.64s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-864500 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-djxxb" [a40594ad-fb5f-4657-bfbd-363213f99182] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-djxxb" [a40594ad-fb5f-4657-bfbd-363213f99182] Running
E1212 21:18:05.284686   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:18:05.612584   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:18:05.619338   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:18:05.630886   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:18:05.653170   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:18:05.695620   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:18:05.777559   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:18:05.939273   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:18:06.261369   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:18:06.904163   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 17.0069278s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (17.64s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-7n7qg" [9fa3eac3-ff9d-4205-a314-076c8be9b5a7] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E1212 21:18:01.440087   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "calico-node-7n7qg" [9fa3eac3-ff9d-4205-a314-076c8be9b5a7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.0063737s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-864500 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-864500 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-864500 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

TestNetworkPlugins/group/calico/KubeletFlags (0.56s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p calico-864500 "pgrep -a kubelet"
I1212 21:18:07.731420   13396 config.go:182] Loaded profile config "calico-864500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.56s)

TestNetworkPlugins/group/calico/NetCatPod (15.51s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-864500 replace --force -f testdata\netcat-deployment.yaml
E1212 21:18:08.185800   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-xp86j" [35fa8aa4-fddc-4613-82c6-cf9de0db91ce] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-xp86j" [35fa8aa4-fddc-4613-82c6-cf9de0db91ce] Running
E1212 21:18:20.649650   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 15.0074672s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (15.51s)

TestNetworkPlugins/group/kindnet/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-864500 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

TestNetworkPlugins/group/kindnet/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-864500 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.23s)

TestNetworkPlugins/group/kindnet/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-864500 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)

TestNetworkPlugins/group/calico/DNS (0.30s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-864500 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.30s)

TestNetworkPlugins/group/calico/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-864500 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

TestNetworkPlugins/group/calico/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-864500 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.22s)

TestNetworkPlugins/group/false/Start (102.42s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-864500 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p false-864500 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker: (1m42.4237837s)
--- PASS: TestNetworkPlugins/group/false/Start (102.42s)

TestStartStop/group/old-k8s-version/serial/FirstStart (108.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-246400 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0
E1212 21:18:46.597155   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-246400 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0: (1m48.2098286s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (108.21s)

TestStartStop/group/embed-certs/serial/FirstStart (81.13s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-729900 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.2
E1212 21:20:13.425429   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:13.432539   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:13.444549   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:13.466796   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:13.508612   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:13.590644   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:13.753042   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:14.075363   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:14.717128   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:15.999623   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:18.025512   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-468800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:18.561725   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-729900 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.2: (1m21.1343565s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (81.13s)

TestNetworkPlugins/group/false/KubeletFlags (0.58s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-864500 "pgrep -a kubelet"
I1212 21:20:22.216644   13396 config.go:182] Loaded profile config "false-864500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.58s)

TestNetworkPlugins/group/false/NetCatPod (13.62s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-864500 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-6586h" [018dcf4d-0193-4a31-b18f-2d56fec0eab4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1212 21:20:23.684439   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-6586h" [018dcf4d-0193-4a31-b18f-2d56fec0eab4] Running
E1212 21:20:31.896699   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:31.903895   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:31.915656   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:31.936839   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:31.978610   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:32.059994   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:32.222430   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:32.544615   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:33.186401   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 13.0077245s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.62s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-246400 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [a2496887-7205-46f0-ac15-58c1dfb5a5c2] Pending
E1212 21:20:33.927527   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:34.468828   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [a2496887-7205-46f0-ac15-58c1dfb5a5c2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [a2496887-7205-46f0-ac15-58c1dfb5a5c2] Running
E1212 21:20:38.440355   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:38.447126   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:38.458821   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:38.481063   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:38.523254   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:38.605450   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:38.767291   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:39.089860   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:39.732526   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:41.014441   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:42.154149   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.0067474s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-246400 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.64s)

TestNetworkPlugins/group/false/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-864500 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.25s)

TestNetworkPlugins/group/false/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-864500 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.23s)

TestNetworkPlugins/group/false/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-864500 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.21s)
E1212 21:23:03.743361   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:23:05.093203   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:23:05.616632   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:23:06.305373   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:23:08.362889   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:23:11.427081   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:23:15.765748   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:23:21.670156   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:23:22.312332   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:23:25.575014   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:23:27.864765   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:23:28.845395   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:23:33.328605   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:23:42.153161   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-246400 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1212 21:20:43.576613   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:44.019406   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-246400 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.4694766s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-246400 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.67s)

TestStartStop/group/old-k8s-version/serial/Stop (12.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-246400 --alsologtostderr -v=3
E1212 21:20:48.699742   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:49.483578   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:52.396290   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:20:54.410446   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-246400 --alsologtostderr -v=3: (12.16566s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.17s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-246400 -n old-k8s-version-246400
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-246400 -n old-k8s-version-246400: exit status 7 (239.1361ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-246400 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.63s)

TestStartStop/group/old-k8s-version/serial/SecondStart (57.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-246400 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0
E1212 21:20:58.942750   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-246400 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0: (56.5353134s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-246400 -n old-k8s-version-246400
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (57.18s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.71s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-124600 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2
E1212 21:21:12.878625   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:21:19.426090   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-124600 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2: (1m22.7095972s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.71s)

TestStartStop/group/embed-certs/serial/DeployApp (9.61s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-729900 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [234d0539-ee4d-43e3-b2c6-390b5f99b0fc] Pending
helpers_test.go:353: "busybox" [234d0539-ee4d-43e3-b2c6-390b5f99b0fc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [234d0539-ee4d-43e3-b2c6-390b5f99b0fc] Running
E1212 21:21:35.373522   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.0052891s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-729900 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.61s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.55s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-729900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-729900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.3450387s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-729900 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.55s)

TestStartStop/group/embed-certs/serial/Stop (12.45s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-729900 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-729900 --alsologtostderr -v=3: (12.4515714s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.45s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.58s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-729900 -n embed-certs-729900
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-729900 -n embed-certs-729900: exit status 7 (231.7715ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-729900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.58s)

TestStartStop/group/embed-certs/serial/SecondStart (49.62s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-729900 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.2
E1212 21:21:53.842015   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-729900 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.2: (48.9988847s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-729900 -n embed-certs-729900
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (49.62s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-9nj55" [c0689201-396f-41b2-bcef-56b8708a4eb1] Running
E1212 21:22:00.388428   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0054235s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-9nj55" [c0689201-396f-41b2-bcef-56b8708a4eb1] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0074359s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-246400 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.32s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-246400 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.47s)

TestStartStop/group/old-k8s-version/serial/Pause (5.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-246400 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p old-k8s-version-246400 --alsologtostderr -v=1: (1.2024972s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-246400 -n old-k8s-version-246400
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-246400 -n old-k8s-version-246400: exit status 2 (631.2456ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-246400 -n old-k8s-version-246400
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-246400 -n old-k8s-version-246400: exit status 2 (639.8615ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p old-k8s-version-246400 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-246400 -n old-k8s-version-246400
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-246400 -n old-k8s-version-246400
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (5.21s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-124600 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) Done: kubectl --context default-k8s-diff-port-124600 create -f testdata\busybox.yaml: (1.056982s)
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [27917890-52fb-4508-9d1b-bb4aad2ba85e] Pending
helpers_test.go:353: "busybox" [27917890-52fb-4508-9d1b-bb4aad2ba85e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [27917890-52fb-4508-9d1b-bb4aad2ba85e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.010864s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-124600 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-24hc9" [a578332c-07f6-4870-8205-732cc1c83baf] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0067337s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-124600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1212 21:22:44.591621   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:22:44.598417   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:22:44.609750   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:22:44.632076   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:22:44.673569   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:22:44.756200   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:22:44.918065   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-124600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.3457253s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-124600 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.53s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-124600 --alsologtostderr -v=3
E1212 21:22:45.240418   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:22:45.882840   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-124600 --alsologtostderr -v=3: (12.2996144s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.30s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.3s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-24hc9" [a578332c-07f6-4870-8205-732cc1c83baf] Running
E1212 21:22:47.165691   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:22:47.862488   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:22:47.869293   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:22:47.880752   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:22:47.902743   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:22:47.944214   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:22:48.026581   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:22:48.188933   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:22:48.510867   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:22:49.152998   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:22:49.727829   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:22:50.435191   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0072976s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-729900 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.30s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.48s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p embed-certs-729900 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.48s)

TestStartStop/group/embed-certs/serial/Pause (5.41s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-729900 --alsologtostderr -v=1
E1212 21:22:52.997538   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-729900 --alsologtostderr -v=1: (1.1996901s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-729900 -n embed-certs-729900
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-729900 -n embed-certs-729900: exit status 2 (633.0794ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-729900 -n embed-certs-729900
E1212 21:22:54.850822   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-729900 -n embed-certs-729900: exit status 2 (598.0723ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-729900 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-729900 --alsologtostderr -v=1: (1.0525074s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-729900 -n embed-certs-729900
E1212 21:22:57.296960   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-729900 -n embed-certs-729900
--- PASS: TestStartStop/group/embed-certs/serial/Pause (5.41s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.57s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-124600 -n default-k8s-diff-port-124600
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-124600 -n default-k8s-diff-port-124600: exit status 7 (232.5316ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-124600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.57s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-124600 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2
E1212 21:22:58.120329   13396 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-864500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-124600 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2: (48.2301979s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-124600 -n default-k8s-diff-port-124600
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.84s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-2hbsz" [86aeb789-8620-452a-a0e9-c0083002da75] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0188741s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-2hbsz" [86aeb789-8620-452a-a0e9-c0083002da75] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0076853s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-124600 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.23s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.48s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p default-k8s-diff-port-124600 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.48s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-124600 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-124600 --alsologtostderr -v=1: (1.1003883s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-124600 -n default-k8s-diff-port-124600
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-124600 -n default-k8s-diff-port-124600: exit status 2 (628.2459ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-124600 -n default-k8s-diff-port-124600
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-124600 -n default-k8s-diff-port-124600: exit status 2 (636.8235ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-124600 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-124600 -n default-k8s-diff-port-124600
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-124600 -n default-k8s-diff-port-124600: (1.0163566s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-124600 -n default-k8s-diff-port-124600
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.98s)

TestStartStop/group/no-preload/serial/Stop (1.85s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-285600 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-285600 --alsologtostderr -v=3: (1.8528293s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.85s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.51s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-285600 -n no-preload-285600
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-285600 -n no-preload-285600: exit status 7 (204.2321ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-285600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.51s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (1.88s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-449900 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-449900 --alsologtostderr -v=3: (1.8833395s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.88s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.53s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-449900 -n newest-cni-449900
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-449900 -n newest-cni-449900: exit status 7 (210.7134ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-449900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.53s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.48s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-449900 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.48s)

Test skip (35/427)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
44 TestAddons/parallel/Registry 22.87
46 TestAddons/parallel/Ingress 27.29
49 TestAddons/parallel/Olm 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
99 TestFunctional/parallel/DashboardCmd 300.01
103 TestFunctional/parallel/MountCmd 0
106 TestFunctional/parallel/ServiceCmdConnect 44.44
117 TestFunctional/parallel/PodmanEnv 0
148 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
149 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
150 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
151 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
192 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 0.54
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
257 TestGvisorAddon 0
286 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
287 TestISOImage 0
354 TestScheduledStopUnix 0
355 TestSkaffold 0
374 TestNetworkPlugins/group/cilium 9.84
391 TestStartStop/group/disable-driver-mounts 0.5

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.34.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

TestDownloadOnly/v1.34.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Registry (22.87s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 8.4248ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-nz8dt" [23918f7d-8dcb-4799-8af5-64c7e3ae6c55] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.0124206s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-ljg5l" [3c265ea5-f345-42be-841c-c92282022381] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.0053706s
addons_test.go:394: (dbg) Run:  kubectl --context addons-349200 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-349200 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-349200 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (9.5019247s)
addons_test.go:409: Unable to complete rest of the test due to connectivity assumptions
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-349200 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-349200 addons disable registry --alsologtostderr -v=1: (1.2051922s)
--- SKIP: TestAddons/parallel/Registry (22.87s)

TestAddons/parallel/Ingress (27.29s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-349200 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-349200 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-349200 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [e0efc6b2-3ff5-4ad9-8110-88876d26d00b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [e0efc6b2-3ff5-4ad9-8110-88876d26d00b] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 15.0065408s
I1212 19:36:45.641268   13396 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-349200 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: skipping ingress DNS test for any combination that needs port forwarding
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-349200 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-349200 addons disable ingress-dns --alsologtostderr -v=1: (2.0228959s)
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-349200 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-349200 addons disable ingress --alsologtostderr -v=1: (8.4242858s)
--- SKIP: TestAddons/parallel/Ingress (27.29s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-461000 --alsologtostderr -v=1]
functional_test.go:931: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-461000 --alsologtostderr -v=1] ...
helpers_test.go:520: unable to terminate pid 7280: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.01s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:64: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/ServiceCmdConnect (44.44s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-461000 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-461000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-m9bpz" [1cde0621-d640-4eec-ae76-1206133c63f2] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-m9bpz" [1cde0621-d640-4eec-ae76-1206133c63f2] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 44.0071246s
functional_test.go:1651: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (44.44s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (0.54s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-468800 --alsologtostderr -v=1]
functional_test.go:931: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-468800 --alsologtostderr -v=1] ...
helpers_test.go:520: unable to terminate pid 4464: Access is denied.
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (0.54s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd
functional_test_mount_test.go:64: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/cilium (9.84s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-864500 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-864500

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-864500

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-864500

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-864500

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-864500

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-864500

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-864500

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-864500

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-864500

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-864500

>>> host: /etc/nsswitch.conf:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

>>> host: /etc/hosts:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

>>> host: /etc/resolv.conf:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-864500

>>> host: crictl pods:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

>>> host: crictl containers:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

>>> k8s: describe netcat deployment:
error: context "cilium-864500" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-864500" does not exist

>>> k8s: netcat logs:
error: context "cilium-864500" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-864500" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-864500" does not exist

>>> k8s: coredns logs:
error: context "cilium-864500" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-864500" does not exist

>>> k8s: api server logs:
error: context "cilium-864500" does not exist

>>> host: /etc/cni:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

>>> host: ip a s:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

>>> host: ip r s:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

>>> host: iptables-save:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

>>> host: iptables table nat:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-864500

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-864500

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-864500" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-864500" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-864500

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-864500

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-864500" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-864500" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-864500" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-864500" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-864500" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

>>> host: kubelet daemon config:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

>>> k8s: kubelet logs:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-864500

>>> host: docker daemon status:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

>>> host: docker daemon config:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

>>> host: docker system info:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

>>> host: cri-docker daemon status:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

>>> host: cri-docker daemon config:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

>>> host: cri-dockerd version:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

>>> host: containerd daemon status:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

>>> host: containerd daemon config:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

>>> host: containerd config dump:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

>>> host: crio daemon status:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

>>> host: crio daemon config:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

>>> host: /etc/crio:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

>>> host: crio config:
* Profile "cilium-864500" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864500"

----------------------- debugLogs end: cilium-864500 [took: 9.3952867s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-864500" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cilium-864500
--- SKIP: TestNetworkPlugins/group/cilium (9.84s)

TestStartStop/group/disable-driver-mounts (0.5s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-453700" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-453700
--- SKIP: TestStartStop/group/disable-driver-mounts (0.50s)